This explanation aims to appeal to your intuitive side. Rather than losing you in the woods of terms and details, my hope is that you will have an aha moment. Once you get the idea of how a bare-bones Cloud works, you will be able to move on and learn about the numerous details and technologies that make up an industry-grade Cloud.
The simplest way to think of Cloud Computing is as a remote computer that you can connect to via the Internet and, on a pay-per-usage basis, use for software development with the tools on that computer, or to run a business from.
The more processing power you need, the more is provided to you automatically, much like electricity is provided through your wall outlet. You pay only for what you use.
That is Cloud Computing at its simplest.
But a perceptive reader may notice that something is missing from this explanation. “Wait a minute,” such a reader may exclaim, “I can picture how a consumer could use the development tools on that remote computer, or even use business software to run a business, but how in the world would that computer provide more processing power when the consumer reaches the limit of its capacity? What if that user’s business suddenly picks up and is flooded with orders that require the power of several powerful computers to handle? How would that one remote computer accommodate such a situation?”
The answer lies in two technologies that computer scientists have been working on since the 1990s: Virtualization and Grid Computing. In a couple of minutes you will see how these two technologies, when married, become what we know today as Cloud Computing.
Virtualization technology allows one physical computer to be divided into smaller virtual computers, each of which appears to its user as a completely isolated physical computer.
Grid Computing technology, on the other hand, links separate, heterogeneous computers so that they cooperate and appear as one powerful computer.
Now let’s put these two technologies together to create our basic Cloud. First, we would purchase two or three powerful computers. Then we would use Virtualization technology to divide each of those powerful computers into a number of smaller virtual computers. Having done that, we would start renting out these small virtual computers to customers over the Internet and charge them per usage.

Whenever any of our customers needed more processing power than their small virtual computer could provide, our Cloud management system, using Grid-based technology, would automatically link an additional small virtual computer to the customer’s original one, creating a virtual computer twice as powerful as what they had a moment before. And, of course, we would start charging twice as much. Should their demand increase again, we would automatically link yet another small virtual computer, resulting in a computer three times as powerful as the customer’s original one. Later, if that customer needed less computing power, our Cloud management system would automatically unlink one or two of the small virtual computers, and the customer would end up with less processing power. The heart of our Cloud is now beating.
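The link/unlink dance above can be sketched in a few lines of Python. Everything here (the class names, the unit capacity, the price) is made up purely for illustration; a real Cloud management system would do this through the hypervisor, not by counting Python objects.

```python
UNIT_CAPACITY = 100   # processing power of one small virtual computer (made up)
PRICE_PER_UNIT = 5    # price per billing period, per linked unit (made up)

class Customer:
    def __init__(self, name):
        self.name = name
        self.units = 1  # every customer starts with one small virtual computer

    @property
    def capacity(self):
        return self.units * UNIT_CAPACITY

class CloudManager:
    """Links or unlinks virtual-computer units to match a customer's demand."""

    def rebalance(self, customer, demand):
        # Scale out: link additional units until capacity covers the demand.
        while customer.capacity < demand:
            customer.units += 1
        # Scale in: unlink idle units when demand falls,
        # never going below the original single unit.
        while customer.units > 1 and customer.capacity - UNIT_CAPACITY >= demand:
            customer.units -= 1
        # Pay per usage: the bill follows the number of linked units.
        return customer.units * PRICE_PER_UNIT

shop = Customer("web-shop")
cloud = CloudManager()
print(cloud.rebalance(shop, demand=250))  # orders flood in: 3 units, bill 15
print(cloud.rebalance(shop, demand=80))   # demand drops: back to 1 unit, bill 5
```

The point of the sketch is only the shape of the logic: capacity grows and shrinks in small, identical chunks, and the bill tracks the number of chunks in use at any moment.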
Both technologies, virtualization and grid computing, have been combined and implemented in a piece of software called a hypervisor. The hypervisor is loaded directly onto the computer hardware and is capable of subdividing that hardware into virtual machines and then auto-scaling them out or in.
So, in summary, the key idea behind the Cloud is to break computing resources into small chunks and then use those chunks to scale out or in, dynamically responding to customer demand.
The rest is details. To build a truly industry-grade body for our Cloud, we would need to add such pieces as self-provisioning services, sophisticated security mechanisms, and many other things. I will explore those features in future posts.