In the paper “Above the Clouds: A Berkeley View of Cloud Computing,” cloud computing is described by three aspects: the illusion, for the cloud user, of infinite computing resources available on demand; the elimination of upfront cost commitments by users; and the ability to pay for computing resources on a short-term basis.
Cloud evangelist Ben Kepes, in his article “Want an Irrefutable Example of the Value of Cloud? Here Goes,” gives an example that demonstrates the potential behind the “illusion of infinite computing”:
One of CycleComputing’s science clients was running a massive-scale run against a cancer problem – something to do with simulating the effects of different compounds on a protein associated with cancer. The run was estimated to take 341,700 hours (39 years). CycleComputing built a utility supercomputer with some 10,600 cloud instances, each of which was an individual multi-core machine – apparently this is the largest cloud HPC (High Performance Computing) environment ever built. If it had been built physically it would have required 12,000 sq ft of data center space and cost $44M. Instead, over a two-hour build time and a nine-hour run time, the total cost of the job run was $4,362. 39 compute years, spun up in only a couple of hours, and completely run in half a day. Compelling story, huh?
As this story demonstrates, what the cloud’s scaling will be able to do for solving humanity’s tough problems is truly promising.
The feature most responsible for this illusion of infinite computing is the cloud’s ability to elastically scale. In this post we take a deeper look at how cloud scales.
There are two types of scaling:
- horizontal and
- vertical.
Horizontal scaling works by linking together identical virtual machines so that they appear as one bigger VM to the user. Adding a VM is referred to as “scaling out,” and releasing a VM as “scaling in.”
In the picture above, a customer starts with one VM. When that customer’s demand jumps significantly (1), two new VMs are cloned (2) and linked in with the first one (3) to create the illusion of a single VM three times as powerful as the original one (4).
Horizontal scaling is done on the fly without any interruption of service to the customers and is entirely transparent to them.
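The scale-out decision itself can be reduced to a simple calculation. Here is a minimal sketch, assuming a hypothetical fixed capacity per identical VM (the function name and numbers are illustrative, not any real cloud API):

```python
import math

# Minimal sketch of the horizontal-scaling rule: given the current load and
# a (hypothetical) fixed capacity per identical VM, compute how many VMs
# should be linked together to appear as one bigger VM.
def desired_vm_count(load, capacity_per_vm, min_vms=1):
    """Number of identical VMs needed to serve `load` requests/sec."""
    if load <= 0:
        return min_vms
    return max(min_vms, math.ceil(load / capacity_per_vm))

# The scenario from the text: demand triples, so one VM becomes three.
print(desired_vm_count(load=300, capacity_per_vm=100))  # scaled out to 3
print(desired_vm_count(load=90, capacity_per_vm=100))   # scaled back in to 1
```

The same rule drives both directions: when demand rises the count grows (scaling out), and when demand falls it shrinks back toward the minimum (scaling in).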
Unlike horizontal scaling, vertical scaling replaces an existing VM with either a larger or a smaller VM. It is referred to as scaling up (larger) or scaling down (smaller).
In the picture above, a customer starts with one VM that has one CPU. When demand grows (1), a new VM with two CPUs is created and booted up (2). The customer then switches to the new, scaled-up VM (3).
Vertical scaling involves an interruption in customer service because it requires reconfiguring and rebooting the VM. For this reason, horizontal scaling is the more common form of scaling in cloud environments.
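The scale-up scenario above can be sketched in a few lines, making the service interruption explicit (all names here are hypothetical, for illustration only):

```python
# Illustrative sketch of vertical scaling: the existing VM is replaced by a
# larger one, and the switch-over requires stopping service and rebooting.
class VM:
    def __init__(self, cpus):
        self.cpus = cpus

def scale_up(vm, new_cpus, event_log):
    """Replace `vm` with a larger VM; log each step, including the outage."""
    event_log.append("stop old VM")           # service is interrupted here
    new_vm = VM(new_cpus)
    event_log.append("boot new VM")
    event_log.append("switch customer over")
    return new_vm

log = []
vm = scale_up(VM(cpus=1), new_cpus=2, event_log=log)
print(vm.cpus)   # the customer now runs on a two-CPU VM
print(log)       # the log records the interruption the text describes
```

Contrast this with horizontal scaling, where new VMs are added alongside the running one and no stop/reboot step appears in the sequence.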
In horizontal scaling, the two software components that facilitate scaling are the Automated Scaling Listener and the Load Balancer. Suppose a user has a website with one VM behind an Automated Scaling Listener and a Load Balancer (see picture below).
If the load suddenly grows, the Automated Scaling Listener detects it (1) and issues API calls to create a second and a third VM cloned from the original one (2) (3).
The new VMs are added to the Load Balancer (4). Now there are three VMs behind the load balancer, and together they can process the increased demand (5). Later, after the load has stayed low for some period of time, the Automated Scaling Listener issues a delete-VM API call, which removes a VM from the load balancer, powers it down, and deletes it.
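The whole listener-plus-load-balancer loop can be sketched as follows. This is a toy model under assumed numbers – the class names, the fixed per-VM capacity, and the create/delete calls are all illustrative stand-ins for real cloud APIs:

```python
# Sketch of dynamic horizontal scaling: an automated scaling listener watches
# the load and issues create/delete "API calls" against the load balancer's
# pool of identical VMs. Everything here is illustrative, not a real API.
CAPACITY_PER_VM = 100  # hypothetical requests/sec one VM can handle

class LoadBalancer:
    def __init__(self):
        self.vms = ["vm-1"]          # the site starts with a single VM

class AutoScalingListener:
    def __init__(self, lb):
        self.lb = lb
        self.counter = 1

    def observe(self, load):
        # Scale out: clone VMs until total capacity covers the load.
        while load > len(self.lb.vms) * CAPACITY_PER_VM:
            self.counter += 1
            self.lb.vms.append(f"vm-{self.counter}")   # create-VM API call
        # Scale in: delete VMs once the load fits on fewer of them.
        while len(self.lb.vms) > 1 and load <= (len(self.lb.vms) - 1) * CAPACITY_PER_VM:
            self.lb.vms.pop()                          # delete-VM API call

lb = LoadBalancer()
listener = AutoScalingListener(lb)
listener.observe(250)   # demand spike: two clones are added
print(lb.vms)           # three VMs now sit behind the load balancer
listener.observe(80)    # demand subsides: the extra VMs are removed
print(lb.vms)
```

A production listener would of course smooth the load over a time window before scaling in, rather than reacting to a single sample, to avoid thrashing.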
Such dynamic horizontal scaling could scale out to dozens or even hundreds of VMs.
A service provider can offer cloud services to customers in three different ways: as infrastructure – Infrastructure as a Service (IaaS), as a platform – Platform as a Service (PaaS), or as an application – Software as a Service (SaaS). Before we decipher each of these acronyms, let’s quickly review virtualization.
As described in the previous post, a cloud provider subdivides their physical computers into virtual computers, or virtual machines (VMs), using virtualization technology. These virtual machines are made of software but function like real physical computers to the user, who can access them over the Internet. A virtual machine is allotted only a portion of the computing resources of the underlying physical computer.
The virtualization software that subdivides the physical computer into virtual machines is called a hypervisor, and it is the first layer of software loaded directly onto the hardware. In the picture below, three virtual machines were created out of the underlying physical computer.
IaaS stands for Infrastructure as a Service. This fancy term simply means that a customer can lease virtual machine(s) from a service provider.
The customer can then install an operating system on a virtual machine and develop and deploy their application(s) on top of it. The customer is responsible for maintaining the operating system and applying patches to it.
Scaling of IaaS. When the cloud management software detects that a customer’s usage of a VM has reached a threshold, it can auto-scale that VM, for example doubling its computing capacity. The alternative to auto-scaling is to notify the customer that the threshold has been reached, so that the customer can choose to adjust the resource allocation themselves.
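The two options – auto-scale or just notify – amount to a one-line policy decision. A hypothetical sketch (the function name and the 80% threshold are assumptions, not any provider’s actual defaults):

```python
# Illustrative IaaS threshold policy: when usage crosses the threshold,
# either auto-scale immediately or notify the customer so they can adjust
# the resource allocation themselves.
def on_usage_sample(usage_pct, threshold_pct=80, auto_scale=True):
    if usage_pct < threshold_pct:
        return "ok"
    return "scale-out" if auto_scale else "notify-customer"

print(on_usage_sample(92, auto_scale=True))    # cloud scales on its own
print(on_usage_sample(92, auto_scale=False))   # customer decides instead
print(on_usage_sample(40))                     # below threshold: no action
```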
PaaS stands for Platform as a Service. With PaaS, the cloud owner provides a customer with a VM, an operating system, and middleware (e.g., a database, a web server, a programming-language execution environment, software tools, etc.). The customer can then develop and run their own applications on this pre-packaged platform. With PaaS, the customer does not need to worry about maintaining the OS and the middleware.
Scaling of PaaS. The underlying VM and storage resources scale out or in automatically to match the customer’s application demand. Alternatively, the customer may choose to be notified when a usage threshold has been reached, so that they can adjust the resource allocation themselves.
SaaS (Software as a Service) is really an unfortunate term. It would have been more appropriate to name it AaaS – Applications as a Service.
With SaaS, cloud owners install applications in the cloud for customers to lease and use. These apps are typically ready-made business solutions, such as Google Apps, customer relationship management (CRM), HR and payroll, etc.
Scaling of SaaS. With SaaS, whenever a customer’s demand goes up, the tasks within the app are distributed at run time onto multiple virtual machines to meet that demand. This is completely transparent to the customer.
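One simple way such run-time distribution could work is round-robin assignment of the app’s tasks across the available VMs. This is a sketch of the general idea, not how any particular SaaS product actually schedules work:

```python
# Illustrative round-robin distribution of an app's tasks across VMs.
def distribute(tasks, vms):
    """Assign each task to a VM in turn, wrapping around the VM list."""
    assignment = {vm: [] for vm in vms}
    for i, task in enumerate(tasks):
        assignment[vms[i % len(vms)]].append(task)
    return assignment

work = [f"task-{n}" for n in range(6)]
print(distribute(work, ["vm-a", "vm-b", "vm-c"]))
```

When demand grows, the provider adds VMs to the list and the same tasks spread across more machines; the customer never sees the change.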
There are numerous specialized forms of SaaS:
- BPaaS (business process as a service),
- TaaS (testing as a service),
- CaaS (communication as a service),
- DaaS (data as a service), and
- desktop as a service.
BPaaS. In the past, businesses programmed their business process flows directly into their applications. If, for example, a company’s business process mandated that a credit check be performed before an order could be issued, the company would build this process step into its application. Today, many business processes have been standardized. Instead of hardcoding these standard steps into a single business application, providers have created menus of processes that are not tied to any one application and can work with many business applications; these are offered as BPaaS. A business can now choose which process steps it needs and lease them from a cloud provider.
TaaS. TaaS is a set of cloud-based apps for delivering automated application-testing services. TaaS is most suitable for specialized testing efforts that don’t require in-depth knowledge of the design or of the system. For example, services well suited for TaaS include automated regression testing, performance testing, and security testing.
CaaS. Communication as a Service (CaaS) provides customers with enterprise-level VoIP, VPNs, and PBX without the need to purchase, host, and manage the infrastructure.
This explanation aims to appeal to your intuitive side. Rather than losing you in the woods of terms and details, my hope is that you will have an aha moment. Once you get the idea of how the bare-bones Cloud works, you will be able to move on and learn about the numerous details and technologies that make up an industry-grade Cloud.
The simplest way to think of Cloud Computing is as a remote computer that you can connect to via the Internet and, on a pay-per-usage basis, use for software development with the tools on that computer, or run a business from.
The more computer processing power you need, the more is provided to you automatically, similar to how electricity is provided through your wall outlet. You pay only for what you use.
That is Cloud Computing at its simplest.
But a perceptive reader may notice that for a Cloud to work as explained, something is missing from the explanation. “Wait a minute,” such a reader may exclaim, “I can picture how a consumer can use the development tools on that remote computer, or even use business software to run a business, but how in the world would that computer provide more processing power when the consumer reaches the limit of that computer’s power? What if that user’s business suddenly picks up and is flooded with orders, requiring power equivalent to several powerful computers – how then will that remote computer accommodate such a situation?”
The answer lies in two technologies that computer scientists have been working on since the 1990s: Virtualization and Grid Computing. In a couple of minutes you will see how these two technologies, when married, become what we know today as Cloud Computing.
Virtualization technology allows one physical computer to be divided into smaller virtual computers that appear to the user as completely isolated physical computers.
Grid Computing technology, on the other hand, links separate, heterogeneous computers so that they cooperate and appear as one powerful computer.
Now let’s put these two technologies together to create our basic Cloud. First we would purchase two or three powerful computers. Then we would use Virtualization technology to divide each of those powerful computers into a number of smaller virtual computers. Having done that, we would start renting out these small virtual computers to customers over the Internet and charge them per usage. When any of our customers at any moment needed more processing power than their small virtual computer could provide, our Cloud management system, using Grid-based technology, would automatically link an additional small virtual computer to the customer’s original one, creating a new virtual computer twice as powerful as what they had a moment ago. And, of course, we would start charging twice as much. Should their demand increase again, we would automatically link in one more small virtual computer, resulting in a computer three times as powerful as the customer’s original one. Later, if that customer needed less computing power, our Cloud management system would automatically unlink one or two of the small virtual computers, and the customer would end up with less processing power. The heart of our Cloud is now beating.
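The narrative above can be condensed into a toy model. All the names and the flat per-VM rate are assumptions for illustration; a real Cloud tracks usage far more finely:

```python
# Toy model of the basic Cloud described above: a pool of small identical
# virtual computers that get linked to / unlinked from a customer to match
# demand, with the bill proportional to the number of linked VMs.
RATE_PER_VM = 1.0  # hypothetical hourly price of one small virtual computer

class BasicCloud:
    def __init__(self, pool_size):
        self.free = pool_size
        self.linked = {}          # customer -> number of linked small VMs

    def set_demand(self, customer, vms_needed):
        current = self.linked.get(customer, 0)
        delta = vms_needed - current
        if delta > self.free:
            raise RuntimeError("pool exhausted")  # the illusion has limits
        self.free -= delta        # link (or, if delta is negative, unlink)
        self.linked[customer] = vms_needed

    def hourly_bill(self, customer):
        return self.linked.get(customer, 0) * RATE_PER_VM

cloud = BasicCloud(pool_size=10)
cloud.set_demand("acme", 1)     # customer rents one small virtual computer
cloud.set_demand("acme", 2)     # demand doubles: link one more, bill doubles
print(cloud.hourly_bill("acme"))
cloud.set_demand("acme", 1)     # demand drops: unlink, bill shrinks again
print(cloud.hourly_bill("acme"), cloud.free)
```

Note the `pool exhausted` branch: the "infinite computing" of the opening section is an illusion sustained by the provider keeping the shared pool large relative to any one customer’s demand.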
Both technologies – virtualization and grid computing – have been combined and implemented in a piece of software called a hypervisor. The hypervisor is loaded directly onto the computer hardware and is capable of subdividing that hardware into virtual machines and then auto-scaling them out or in.
So, in summary, the key idea behind the Cloud is to break computing resources into small chunks and then use those chunks to scale out or in, dynamically responding to customer demand.
The rest is details. To build a truly industry-grade body for our Cloud, we would need to add such pieces as self-provisioning services, very sophisticated security mechanisms, and many other things. I will explore those features in future posts.