You may have noticed that for a couple of months I’ve been going down this cryptocurrency rabbit hole. I was trying to wrap my mind around the whole ICO craze and explain it in plain English to someone else; I also looked at the implications of tokenization and blockchain technology for different industries (some are very promising indeed!).
My latest discovery, however, was quite a surprising one. Did you know that blockchain can be used to build the world’s largest supercomputer? In fact, some folks have already made significant attempts in that direction.
Before we go on and talk about the supercomputer thing, I’d like to quickly touch on another important term – distributed computing.
The idea of distributed computing isn’t that novel per se.
Some of you may be familiar with Ethernet – the “proto Internet”, a local-area networking technology invented back in the 70s. A number of computers were connected together to form a local network where various types of information could be exchanged. More advanced forms of Ethernet still exist today and are used to exchange certain data internally, in a secure fashion.
Distributed computing, in general, assumes that you are using a system of networked computers that can coordinate their actions to pursue one common goal – e.g. simultaneously process a massive pile of incoming data. Examples of distributed computing include your favorite MMO game, if that’s your thing, or a torrent network – a legal one, of course ☺.
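To make the idea a bit more concrete, here’s a toy sketch of that “coordinate toward one common goal” pattern – the names and structure are my own illustration, not any real framework: one big job is split into chunks, each worker processes its own slice, and the partial results are combined into the final answer.

```python
# Toy illustration of distributed computing: split one large job across
# several workers and combine their partial results. Here the "networked
# computers" are just threads in one process, but the coordination
# pattern (split -> work in parallel -> merge) is the same.
from concurrent.futures import ThreadPoolExecutor


def process_chunk(chunk):
    """Each worker handles only its own slice of the data."""
    return sum(x * x for x in chunk)


def distribute(data, workers=4):
    """Coordinate the workers toward one common goal: the full result."""
    chunk_size = -(-len(data) // workers)  # ceiling division
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Merge the partial results into the final answer.
        return sum(pool.map(process_chunk, chunks))
```

In a real distributed system the chunks would travel over the network to separate machines, but the split-and-merge logic looks much the same.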
The true appeal of distributed computing lies in the fact that you don’t need to buy more or better hardware to do some top-notch operation. You can “borrow” resources from someone else whenever you need to perform a complicated operation – for instance, 3D rendering.
Because here’s the thing – lack of computing power is slowing down scientific progress. Securing sufficient resources isn’t always easy or affordable. In most cases you need to go through the process of selecting and signing up with a cloud services provider such as AWS, waiting for their approvals and so on. Or, even more of a drag, you can try to get computing power straight from a commercial data center. Neither process is instant or efficient.
Next, there’s the question of speed – machines situated farther from one another will work at a slower pace. Developers, especially in the IoT domain, now note that the current cloud-based models for sharing resources have significant lag. The data-generating device (e.g. your smartwatch) is not physically located close enough to the data center. So the longer the transmitted data “travels”, the less time businesses have for processing it and returning it to you in the form of some cool insight or feature.
So what does blockchain have to do with all this? The quick answer would be – a lot, actually. But allow me to go into further detail.
Apart from the speed problem, there’s also the issue of properly managing the relationships between the different parties involved in distributed computing, especially if you choose to “rent” power not from a professional data center but from a smaller entity – say, a bitcoin miner – or from a bunch of different sources simultaneously.
In that case, it becomes extremely hard to track how the work is performed, to verify that all points of the agreement are being kept, and to propose the right price for the job so that the power provider knows that running the computation is worth their time.
But blockchain can fix all of those issues, specifically thanks to smart contracts. The benefits here are as follows:
- “Proof of work” is instantly visible and securely recorded on the chain. You can see exactly who did what.
- The peer-to-peer nature of blockchain means that resources could be rented from any location and the computation can occur close to where the data is being generated.
- Anyone with idle hardware – a laptop, a GPU or even a smartphone – can rent it out to someone in need of additional power and earn a side income.
- Blockchain makes the entire process of renting just enough computational time for an appropriate sum decentralized, thus eliminating the intermediary who sets the price and dictates the terms.
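The escrow logic a smart contract might encode for a compute rental could be sketched roughly like this – a purely illustrative, in-memory Python mock-up of my own, not any real contract code (real smart contracts live on-chain and are written in languages like Solidity): the requester locks the payment up front, the provider submits the result together with a proof of work, and the payment is released automatically once the proof checks out – no intermediary decides the outcome.

```python
# Toy, in-memory sketch of a compute-rental escrow, mimicking the flow a
# smart contract could automate. All names here are hypothetical.
class ComputeEscrow:
    def __init__(self, requester, provider, price):
        self.requester = requester
        self.provider = provider
        self.price = price
        self.locked = False   # has the requester deposited the payment?
        self.result = None    # (result, proof) submitted by the provider
        self.paid = False

    def lock_payment(self):
        # Step 1: the requester deposits the agreed price into escrow.
        self.locked = True

    def submit_result(self, result, proof):
        # Step 2: the provider submits the computation plus a proof of
        # work; on a real chain both would be permanently recorded.
        self.result = (result, proof)

    def release_payment(self, verify):
        # Step 3: the payment is released automatically if and only if
        # the proof passes verification -- no human intermediary.
        if self.locked and self.result and verify(*self.result):
            self.paid = True
            return self.price
        return 0
```

A usage example: the requester locks 10 tokens, the provider submits a result with its proof, and `release_payment` pays out once the supplied `verify` check passes.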
With this scenario in place, we can create a world supercomputer, interlinking hundreds of thousands of computers to run large-scale applications. Oh wait, I think there is one already – it’s called the Golem Project.
The founders of the Golem Project want to entice users from all over the world to sign up and rent out their computing power whenever it’s not in use. You can “donate” as little as your laptop’s processor, or as much as a few servers from your data center.
Their ultimate goal, however, is to create a truly decentralized computing environment where no task is too large or complex to be processed. They want to unite all computers running the Golem app within a single peer-to-peer network. In that case, anyone on board can send a computing task to all the network’s participants and have it solved within a few hours – while the providers get paid for their work in the platform’s tokens.
Altumea is a similar platform, hot off the press. Unlike Golem, it’s more niche: it invites GPU owners (the gamers, the miners, the artists) to “rent” their hardware to folks in need of additional power – the scientists, the engineers and the IoT developers. This platform is also blockchain-based, and participants receive payouts in the platform’s currency, which can later be exchanged for the crypto of your choice.
These “Airbnbs for computers” have a significant advantage over traditional cloud computing: they make analytics happen closer to where the data is being generated; they eliminate a centralized point of failure (e.g. when something goes wrong on the provider’s end); and they make access to computational power more affordable and streamlined – no lengthy onboarding procedures and so on.
As the demand for computing power continues to increase, I believe we will see more P2P platforms like these emerge, offering different types of computing resources for rent.