Decentralized computing … Akash

Akash Network

Many different crypto projects are running decentralized computing, as it is a good complement to the distributed storage and messaging solutions I previously documented. The principle is to be able to deploy a workload on someone else's computer to serve a reliable web service. This is an IaaS layer, like the ones found on Azure, AWS, GCP, OVH… and an important piece of the decentralized cloud under construction in web3.

In this series of blog posts, I'll review some of the projects, like Flux or Golem, and explain how to take part in and use the one that seems the most reliable. As there are many different solutions and it took quite a while to test them all, the review is split into several posts. This first one is about Akash.

Akash was created in 2018. It uses Proof-of-Stake for minting the token and has a limited impact in terms of carbon emissions and energy consumption for its contract processing. To stake AKT, you go to specific platforms managing the validators, where you can delegate your tokens for staking. You may also be able to run a validator node yourself with a stake of 1,000 AKT ($260). The reward APY is about 16%, but closer to 4% once adjusted for the inflation of the network supply.
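To put the inflation-adjusted figure in perspective, here is a quick back-of-the-envelope check in Python. The ~16% APY comes from the text above; the ~12% supply inflation is my own assumption to reconcile it with the quoted ~4% real yield, not a figure from the network:

```python
# Back-of-the-envelope real staking yield.
# Assumptions (not live network data): ~16% nominal APY as quoted above,
# and ~12% yearly supply inflation inferred from the quoted ~4% real yield.
nominal_apy = 0.16
supply_inflation = 0.12

# Naive difference: what the post quotes as "about 4%"
naive_real_yield = nominal_apy - supply_inflation

# Compounded version: how much your *share* of total supply grows
real_yield = (1 + nominal_apy) / (1 + supply_inflation) - 1

print(f"naive: {naive_real_yield:.1%}, compounded: {real_yield:.1%}")
```

The compounded figure lands slightly below the naive 4%, which is the more honest way to read a staking APY when the supply itself inflates.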

The Akash token is AKT. 100M tokens were pre-minted and about 3M are currently minted per month, a rate that decreases month after month toward a total supply of 388M. The token price is currently $0.26.

The current infrastructure size is 3,862 cores, 20TB of RAM and 252TB of HDD. This is a small network, corresponding to roughly 25 bare-metal datacenter-class servers. It's currently a bit bigger than the Golem network.

It runs a portfolio of 40 dApps, mostly blockchain-oriented, but you can deploy your own workload, and this is really the plus of Akash compared to the others I have tested.

Use Akash to host your container

To deploy any workload on Akash, you can use the Cloudmos Deploy application, available on Mac, Windows and Linux. You need to create a wallet and fund it with some AKT. I recommend a minimum of 6 AKT: you need to deposit 5 AKT when starting an instance, and you need to register a certificate, which costs about 0.00xx AKT.

You need to create a certificate and publish it on the blockchain before being able to create a new deployment; this has a small cost.

For the running cost, you select your own price (if it's too low, I assume no one will accept to run it); the usual price is 1 uAKT per block for 0.5 CPU, 512MB of RAM plus storage. The cost is expressed in uAKT (1,000,000 uAKT = 1 AKT) and is charged per block. Block time is about 7s, which gives about 4.5M blocks per year and 375k per month.
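Putting those numbers together, the monthly cost of a bid can be sketched as follows. The block time, pricing and AKT/USD rate are the approximate figures quoted above and will drift over time:

```python
# Rough monthly cost of a lease priced in uAKT per block.
# All constants are approximations taken from the text above,
# not live network values.
BLOCK_TIME_S = 7          # ~7 s per block
UAKT_PER_AKT = 1_000_000  # 1 AKT = 1,000,000 uAKT
AKT_USD = 0.26            # spot price at the time of writing

def monthly_cost_usd(price_uakt_per_block: float) -> float:
    """USD cost of one month of lease at the given per-block price."""
    blocks_per_month = 30 * 24 * 3600 / BLOCK_TIME_S  # ~370k blocks
    akt = price_uakt_per_block * blocks_per_month / UAKT_PER_AKT
    return akt * AKT_USD

# The usual 1 uAKT/block bid for 0.5 CPU + 512MB RAM
print(f"${monthly_cost_usd(1):.2f}/month")
```

At 1 uAKT per block this lands around a tenth of a dollar per month, which is consistent with the very low lease prices discussed below.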

As I'm used to writing, having a price denominated in the token itself, rather than a stable price maintained by a burn-and-mint equilibrium, makes the project complex to use over time. Depending on the token value and the evolution of demand, you will have to update your offer and possibly redeploy your workload. This impacts your day-to-day operations.

Then you define your container, sizing and pricing options in an SDL (Stack Definition Language) file. Let's take a simple example to run something VM-like.

version: "2.0"
services:
  web:
    # "latest" can't be used 
    image: hermsi/alpine-sshd:9.0_p1-r2
    expose:
      - port: 22
        to:
          - global: true
    env:
      - ROOT_PASSWORD=password
profiles:
  compute:
    web:
      resources:
        cpu:
          units: 0.5
        memory:
          size: 512Mi
        storage:
          size: 512Mi
  placement:
    dcloud:
      pricing:
        web:
          denom: uakt
          amount: 100
deployment:
  web:
    dcloud:
      profile: web
      count: 1

The platform will then propose different provider offers for running your container. I've been able to find a VM at $0.34 a month for this workload. You can update your deployment script afterwards but, honestly, I'm not sure it works well. Port 22 will be dynamically mapped by the provider, and you will get the assigned port on the deployment screen.

web: [Normal] [ScalingReplicaSet] [Deployment] Scaled up replica set web-65f6b5b78 to 1
web: [Normal] [SuccessfulCreate] [ReplicaSet] Created pod: web-65f6b5b78-mnpbr
web: [Normal] [Scheduled] [Pod] Successfully assigned mkgontsjhglm9f4t6qjueddf4erq9j4oaeto8ljim4jdq/web-65f6b5b78-mnpbr to node4
web: [Normal] [Pulling] [Pod] Pulling image "gotechnies/alpine-ssh:helm-chart"
web: [Normal] [Pulled] [Pod] Successfully pulled image "gotechnies/alpine-ssh:helm-chart" in 7.340065568s
web: [Normal] [Created] [Pod] Created container web
web: [Normal] [Started] [Pod] Started container web

5 AKT are locked to guarantee that the service provider will be paid, avoiding a payment on every block. This locked amount is released when you close the contract.
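As a rough illustration of why 5 AKT is enough, here is how long such a deposit covers a lease at the 1 uAKT/block price mentioned earlier. This is a sketch only; the actual escrow module settles differently, but the order of magnitude holds:

```python
# How long a 5 AKT escrow deposit lasts at a given per-block bid price.
# Uses the post's approximate figures: 7 s blocks, 1 AKT = 1e6 uAKT.
def escrow_duration_days(deposit_akt: float, price_uakt_per_block: float) -> float:
    """Days of lease covered by the deposit before it must be topped up."""
    blocks = deposit_akt * 1_000_000 / price_uakt_per_block
    return blocks * 7 / 86_400  # blocks -> seconds -> days

print(f"{escrow_duration_days(5, 1):.0f} days")
```

At 1 uAKT per block, the 5 AKT deposit covers more than a year of lease, so for a small workload it is effectively a one-time lock rather than a running expense.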

Persistent storage

Persistent storage can be declared and attached to a container, but it will only be located on the server where the lease has been contracted. This means the storage will be cleared at the end of the lease… and also if your provider shuts down.

version: "2.0"
services:
  xxx:
    image: ...
    expose: ...
    params:
      storage:
        data:
          mount: /var/lib/xxx
profiles:
  compute:
    xxx-profile:
      resources:
        cpu:
          units: 1
        memory:
          size: 1Gi
        storage:
          - size: 512Mi
          - name: data
            size: 1Gi
            attributes:
              persistent: true
              class: beta2

Limits of Akash

Even if Akash is the most advanced option I found in web3 to deploy a custom workload, it has the limitation of a service with no real reliability guarantee. That's not a problem if you design your solution to tolerate host disconnection and persistent storage deletion. The monthly rental price can be really low, and it's a way to get a VM anonymously. But the extra architecture you need to deploy to ensure the reliability of your service can directly impact the overall cost compared to a VPS from a cloud provider. Once again, though, it's for me the solution that gives the best results in the shortest time.

If you select a big provider with a large number of CPUs, you can expect it to run multiple nodes and be more reliable. In my opinion there is a lack of information about the provider architecture when you contract your lease, and this would be interesting to know.

Large providers do not propose better prices than low-cost traditional providers like OVH. This makes sense: the Akash architecture is complex and can't be run on spare hardware, so the cost of a dedicated architecture ends up close to standard public offerings.

Providing computing power to Akash

Akash is based on Kubernetes, but it is not one big K8s infrastructure: it's a federation of CaaS providers. This means you need to deploy your own local K8s cluster and provide multiple nodes if you want to offer some reliability.

The documentation seems good, and the installation itself is not a really complex task if you have solid Linux admin skills.

You can't really use Akash to share your spare processing time with others, as the architecture is complex and should involve multiple servers at a time.

For this reason I did not build such an infrastructure: complex work for a limited return on investment.

On top of this comes the question of liability: as explained in my previous blog post about decentralized infrastructure, sharing an IP with an anonymous person can be a risk.
