Categories: Cloud
Introduction
Cloud computing and cloud-based services are the buzzwords of this decade. So what exactly is a cloud?
Unfortunately, the term “cloud” is not very well defined. It usually refers to one of three things:
- Infrastructure as a Service (IaaS),
- Platform as a Service (PaaS), or
- Software as a Service (SaaS).
Software as a Service (hosted apps you can log into) is boring - this article looks at the other two.
A Minimal IaaS Cloud
First, you take a large room and fill it with racks of servers, routers and storage devices. Ideally, add redundant power supplies, high-speed optical networking, and other goodies. However, what you then have is simply a datacenter. To make these resources an IaaS “cloud”, you need to add a (reasonably) simple REST API for managing these servers. The API should provide endpoints which:
- list/create/update/delete network subnets and firewall rules
- list/create/update/delete network loadbalancers
- list/create/update/delete virtual disks
- list/create/update/delete/start/stop virtual machines
The API for creating a virtual machine must take as parameters at least (see the sketch after this list):
- either num-cpus + amount-of-ram + similar settings, or a “machine type” which is a predefined set of values for those settings
- the network(s) to which the machine has access
- and the storage location of the virtual machine image to be booted on that machine
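As a concrete illustration, here is a minimal sketch (in Python, using the requests library) of what a call to such a create-VM endpoint might look like. The URL, field names and token handling are all invented for illustration - every provider defines its own:

```python
import requests

API = "https://cloud.example.com/v1"   # hypothetical API root
TOKEN = "..."                          # obtained from the provider's auth service

resp = requests.post(
    f"{API}/virtual-machines",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "machine_type": "standard-2",        # predefined cpu/ram bundle
        "networks": ["subnet-frontend"],     # network(s) the VM may access
        "image": "images/debian-9",          # storage location of the boot image
    },
)
resp.raise_for_status()
print(resp.json()["id"])                     # id of the newly-created VM
```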
Usually, as well as providing an API for managing resources in the cloud, the cloud-management software also provides a web-based user interface for performing the same tasks. This combination of API and interactive interface allows users to get started quickly and discover functionality interactively, then move on to more advanced usage of the API such as managing resources via scripts and tools. In particular, an API allows systems to respond to load automatically by triggering the creation of additional resources (“scaling”).
Cloud-management software usually also provides some kind of account-management and user-management functionality, so that different users/groups can share the resources of the cloud without getting in each other’s way - and perhaps also so they can be billed for the resources they use. Of course user and account management requires some kind of authentication and authorization system.
Commonly, a “cloud provider” which offers the minimal IaaS API described above also offers many additional useful APIs. Some kind of “shared filesystem” service is usually provided, and things like “hosted Kubernetes”, “serverless functions”, hosted NoSQL databases, and other more advanced components are also often included.
Note: the above description specifies a REST API as the interface; of course it is possible to create a cloud API using some other kind of communication protocol. Google Cloud, for example, provides access to services over both REST (HTTP + JSON) and gRPC (HTTP/2 + protocol buffers).
IaaS Cloud Implementations
Cloud-management software as described above has been implemented independently several times. A company which offers a cloud API backed by a datacenter is sometimes called a “cloud provider”.
Amazon have invented their own cloud API which does the above (and more), created an implementation of the API, and use it to make resources from their (many) datacenters available to paying customers. Their “core” cloud system is called Amazon Elastic Compute Cloud (aka EC2), and their extended system is called Amazon Web Services (AWS).
Google have done the same - invented their own cloud API which has the same basic functionality as Amazon’s cloud, but is of course incompatible with it. Their system is called Google Cloud Platform (aka GCP).
Microsoft have also invented a cloud API with proprietary implementation, and call their cloud service (api plus datacenters) Azure.
Alibaba Cloud is not yet a big name in the western world, but as the Amazon of China it is possibly something for developers in Asia to consider, and may also be a significant player in the world market in the near future.
The OpenStack project is an open-source cloud API and corresponding implementation with many contributors; it was originally created by Rackspace and NASA, and companies such as RedHat are now among its most significant contributors. It is widely used within company datacenters to provide cloud services for company-internal projects.
The Apache CloudStack project is an alternate open-source API and implementation.
Digital Ocean has its own cloud API, proprietary implementation, and datacenters. The service offering is less extensive than that of Amazon, Google, or Microsoft but easier to understand and somewhat cheaper.
Exoscale uses a modified version of the Apache CloudStack infrastructure to provide cloud services from its Europe-only datacenters. Like DigitalOcean, its offering is simpler and less expensive than those of the main players.
Rackspace is primarily a consulting company with expertise in Amazon, Google and Microsoft clouds. However they also run their own OpenStack-based cloud (and in fact were one of the inventors of OpenStack).
OnApp is a company with its own cloud API and proprietary implementation. Unlike Amazon/Google/Microsoft, OnApp does not run datacenters itself, but instead licences its software to other companies with datacenters which they wish to use as a cloud - either for internal use, or to offer to customers.
Note: this is not a complete list!
According to Gartner in 2016, market share was:
- AWS: 44.2%
- Azure: 7.1%
- Alicloud: 3.0%
- GCP: 2.3%
- Rackspace: 2.2%
- Other: 41.2%
According to Skyhigh, market share by revenue was:
- AWS: 47.5%
- Azure: 10.0%
- GCP: 3.95%
- IBM: 2.77%
- Other: 36.2%
while market share by workload was:
- AWS: 41.5%
- Azure: 29.4%
- GCP: 3.0%
- IBM: 2.6%
- Rackspace: 2.9%
- Other: 20.7%
How does an IaaS implementation work?
How can an API be used to configure networks or allocate virtual machines, ie what happens when such an API is called?
The details are rather complicated - that’s why projects like OpenStack have been developing code for years. However at least a rough outline is possible in a few paragraphs.
Allocating VMs
Each server in the datacenter which is intended to be used as a “virtual machine host” needs some kind of base operating system which acts as a hypervisor. Hypervisors are generally divided into two categories:
- those with an extremely small core of privileged code which coordinates between the hosted virtual machines, where one of those virtual machines is a special “manager” instance that runs management programs which accept network commands and pass them on to the hypervisor itself; Xen, VMware ESXi and Hyper-V are the best-known hypervisors of this type.
- those which are a reasonably normal operating system (though trimmed of unnecessary software) that both manages virtual machines and hosts the management software directly; Linux KVM is the best-known hypervisor of this type.
In addition, server-class computers come with embedded management firmware (eg IPMI) which can be used to power on the main processing components remotely and make the machine boot from an image hosted on a server with a specified IP address (eg via PXE).
Sometimes servers to be used as virtual machine hosts have the necessary hypervisor installed on a disk attached to the system, and sometimes the embedded firmware is used to make servers boot from a suitable image on a network server.
When a cluster-management server receives a suitably-authorized request to start (boot) a new VM, it must perform a “best fit” search to determine which physical host in the datacenter is most appropriate for the new VM (see the sketch below). It must then communicate with the management software on that host to tell it to allocate a new VM with the appropriate settings and boot-image. Some minimal pre-boot or post-boot modification of the instance might also be necessary; one common technology for such boot-time configuration is cloud-init.
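To make the “best fit” idea concrete, here is a toy sketch. Real schedulers weigh many more factors (affinity rules, failure domains, reserved capacity, etc), so this is illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: int
    free_ram_gb: int

def best_fit(hosts, cpus, ram_gb):
    """Pick the candidate host that would be left with the least slack."""
    candidates = [h for h in hosts
                  if h.free_cpus >= cpus and h.free_ram_gb >= ram_gb]
    if not candidates:
        return None   # no host can take this VM right now
    return min(candidates,
               key=lambda h: (h.free_cpus - cpus) + (h.free_ram_gb - ram_gb))

hosts = [Host("rack1-srv04", 4, 16), Host("rack2-srv11", 32, 128)]
print(best_fit(hosts, cpus=2, ram_gb=8).name)   # -> rack1-srv04
```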
Boot-time config may be required to mount network filesystems within the booted system as specified in the VM configuration. The primary network address of the virtual machine is generally allocated via DHCP, but some post-boot setup may also be needed in order to define additional network interfaces for the VM instance.
Installing software and other significant reconfiguration of the booted system is not the responsibility of the cloud-management software - either the image to boot should be suitably prepared, or the booted image should contain pre-installed software which connects to a configuration server such as Puppet to obtain its full desired configuration.
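For illustration, this is roughly what that hand-off looks like from the user’s side: a cloud-init “user data” document passed along with the (hypothetical) create-VM request sketched earlier. The #cloud-config directives shown (packages, runcmd) are standard cloud-init; the "user_data" field name is invented, as each cloud names it differently:

```python
USER_DATA = """\
#cloud-config
packages:
  - puppet                                # pre-install the config-management agent
runcmd:
  - [systemctl, enable, --now, puppet]    # let Puppet fetch the full configuration
"""

vm_spec = {
    "machine_type": "standard-2",
    "image": "images/debian-9",
    "user_data": USER_DATA,               # handed to cloud-init at first boot
}
```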
When a start-VM request is received, the cluster-management server must also generally communicate with network-related systems to ensure that DHCP works as expected, and that the new VM instance is able to communicate with the other servers on its subnet. In traditional networking, subnets are often quite physical: servers which are logically related are also physically co-located. In a cloud-based datacenter this is usually not the case - servers which logically belong together can be physically separated - so network routing must be more flexible (this is typically achieved with software-defined networking).
Allocating Storage
VMs can request “local storage” and are simply given access to disks attached to their host (which means the VM-to-host allocation algorithm must ensure a VM is only started on a host with sufficient free disk). However such storage does not “survive” if a VM is migrated to another host.
More commonly, an API is available to define a logical volume of a specific size. This API communicates with the storage system to define the logical volume name and reserve storage blocks [1]. The booted VM then mounts the logical volume as “remote block storage” - to the VM user-space it appears almost identical to a local disk of the same size, except that each “read” or “write” operation does not use a local protocol like SCSI to communicate with a nearby disk, but instead sends a network packet to communicate with one of the datacenter storage servers.
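Sketched against the same hypothetical REST API as earlier (all endpoint and field names invented), defining and attaching such a volume might look like this:

```python
import requests

API = "https://cloud.example.com/v1"        # same hypothetical API as above
HEADERS = {"Authorization": "Bearer ..."}

# define a 100GB logical volume; the storage system reserves blocks for it
vol = requests.post(f"{API}/volumes", headers=HEADERS,
                    json={"name": "db-data", "size_gb": 100})
vol.raise_for_status()

# attach it to a VM; inside the VM it appears as a block device (eg /dev/vdb)
# even though every read/write actually crosses the datacenter network
requests.post(f"{API}/virtual-machines/vm-1234/attachments",
              headers=HEADERS,
              json={"volume_id": vol.json()["id"]}).raise_for_status()
```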
Optimisation, Monitoring and Billing
The cloud-management software is responsible for monitoring the datacenter, giving the datacenter admins information about hardware problems. Virtual machines may be moved from host to host to reduce costs or to free up a machine for maintenance; sometimes this can even be done without stopping the virtual machine’s operating system (“live migration”).
And of course the cloud-management software is responsible for tracking who is using how many resources, for the purposes of implementing quotas or generating bills.
Additional IaaS Services
What other services do cloud providers such as Amazon or Google offer in addition to the basic IaaS functionality?
Commonly, an authentication infrastructure is provided, ie a scalable database of (account-id, credentials) records for users and systems. APIs and UIs are provided for creating/managing/deleting accounts, and for “logging in” to obtain some kind of “authentication token” that can be used in calls to the other APIs offered by the cloud provider.
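The usual flow is sketched below (hypothetical endpoints again): long-lived credentials are exchanged for a short-lived token, and the token then accompanies every later call:

```python
import requests

# exchange long-lived credentials for a short-lived token...
auth = requests.post("https://auth.example.com/v1/tokens",
                     json={"account": "alice", "secret": "..."})
token = auth.json()["token"]

# ...then present the token in every subsequent API call
requests.get("https://cloud.example.com/v1/virtual-machines",
             headers={"Authorization": f"Bearer {token}"})
```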
An authorization infrastructure usually goes together with the authentication, ie a way to assign permissions to specific accounts.
Some kind of project-based or account-based isolation is usually available, so that resources (such as the storage described below) are accessible only to virtual machines defined within the same project or account. This avoids conflicts between groups, and reduces security risks.
Billing/accounting is typically provided, in order to track who is using how much of cloud resources and charge for it if appropriate.
Network-mounted filesystems are commonly provided, ie fileservers which virtual machines can mount in order to share data with other systems, via protocols such as NFS or SMB.
Network-based block storage, aka “logical volumes”, is commonly provided, separating virtual machines from their storage. Virtual machines live on a host system, and if they simply store data on the disk(s) physically attached to their host then (a) the amount of storage is limited, (b) if the host crashes then the data is lost, and (c) the virtual machine cannot be moved to another host. When the virtual machine can instead treat a range of disk-blocks provided over the network as if they were disk-blocks on a local device, these issues go away. Of course, IO performance then depends heavily on the performance of the local network - but datacenters such as those run by Google and Amazon typically have fiber-optic cable to each rack. Note that block-based storage can only be mounted by one virtual machine at a time, ie it is not shared storage. It is, however, far faster than using a network-mounted filesystem via NFS/SMB/etc, and the lifetime of the volume contents is not tied to a specific VM or host.
Object storage is a kind of shared filesystem which is optimised for the cloud, providing rapid access and very high amounts of storage. Existing shared-filesystem protocols such as NFS and SMB were designed for centralized file storage, not distributed storage on the scale that clouds provide; object stores therefore cannot be treated like traditional remote filesystems. Typically an object-store is accessed via the HTTP protocol, using REST operations to list objects, upload (“put”) whole objects, and fetch objects or byte-ranges of them. The term “storage bucket” is often used to represent the object-store equivalent of a single remote filesystem, ie a tree of directories containing files. Interestingly, Google Cloud Platform (GCP) can be configured to use a storage bucket as a website - ie a website consisting only of static files can be served directly from an object store without needing an (explicit) webserver.
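As one real example, Amazon’s S3 object store is usually driven through an SDK that wraps these HTTP operations. A minimal sketch using the boto3 Python library (the bucket name here is assumed to already exist):

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# upload ("put") a whole object into a bucket
s3.put_object(Bucket="example-bucket", Key="reports/2018.csv",
              Body=b"col1,col2\n1,2\n")

# fetch an object - optionally just a byte-range of it
obj = s3.get_object(Bucket="example-bucket", Key="reports/2018.csv",
                    Range="bytes=0-9")
print(obj["Body"].read())
```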
An API for providing database-type storage is often offered. Of course given basic functionality such as the ability to create VMs, networks, and large amounts of block file storage, it is possible for a cloud user to install and run their own database servers. However this is a significant amount of work; using a cloud-provided alternative can be appealing. More significantly, a database service is usually cheaper than running databases on dedicated VMs because the cloud provider is hosting the service for many customers and can consolidate costs. And often most significantly of all, the database service can usually scale automatically from few requests and small data amounts to very large numbers of requests and large data amounts without needing additional configuration on the part of the user. To be able to scale well, such database services are often not traditional SQL-style systems but instead NoSQL systems with different constraints (particularly around transactions and joins) that allow data to be spread efficiently across a cluster.
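As a sketch of how little code such a hosted NoSQL service requires compared with running a database server yourself, here is AWS DynamoDB via boto3 (the table, named "users" and keyed on "id", is assumed to already exist):

```python
import boto3  # AWS SDK for Python

table = boto3.resource("dynamodb").Table("users")

table.put_item(Item={"id": "alice", "plan": "free"})   # write one item
item = table.get_item(Key={"id": "alice"})["Item"]     # read it back by key
print(item["plan"])
```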
Cloud environments often offer an API for sending asynchronous messages, ie a “message broker”.
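A sketch of the producer/consumer pattern such a service supports, here using AWS SQS via boto3 (the queue URL is a placeholder):

```python
import boto3  # AWS SDK for Python

sqs = boto3.client("sqs")
queue_url = "https://sqs.eu-west-1.amazonaws.com/123456789012/jobs"  # placeholder

# producer: enqueue a message for asynchronous processing
sqs.send_message(QueueUrl=queue_url, MessageBody='{"job": "resize-image"}')

# consumer: fetch, process, then delete the message
msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for m in msgs.get("Messages", []):
    print(m["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])
```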
They also often provide an API to manage Domain Name Service (DNS) data. Sometimes this is purely for internal use, ie the API can be used to register names that can point to virtual machines within the cloud environment but which are not usable outside that environment. However sometimes they also provide an API to manage public DNS records for domains.
Sometimes the cloud provider includes Content Delivery Network features to improve the delivery of static resources to external users, eg html/javascript/css/icons related to applications running within the cloud environment.
And some IaaS cloud environments offer the ability to run/manage Docker containers (or equivalent) as well as pure virtual machines, via the Kubernetes container-orchestration system (or equivalent). Given the ability to create virtual machines, a cloud user could install the necessary software (Docker, Kubernetes, etc) themselves - but a cloud provider can often provide that service more cheaply, more reliably, and more scalably, making such features appealing to cloud users.
Finally, cloud environments may offer Platform as a Service (PaaS) features too; see the next section of this article.
This list of services provided by Amazon AWS and this list of services provided by Google GCP show the kind of things a user of those cloud environments can take advantage of. In addition, cloud providers often partner with other companies to offer additional APIs that code running in a “cloud” can use - for an appropriate licence fee.
Security
A VM is the owner’s responsibility; the cloud provider (hopefully) applies security patches to their hypervisors, but applying patches to (and rebooting) VM instances is not their responsibility.
Configuring IaaS
As noted earlier, an IaaS cloud generally provides both an API and an interactive interface for manipulating resources within the cloud. However configuring complex sets of VMs and networks by hand is slow, error-prone, and not repeatable. There are several good tools to allow such resources to be defined declaratively and then to “apply” the definition to a cloud to create (or destroy) the necessary resources. See this article on Terraform for an example.
Platform as a Service
PaaS is a “higher level” abstraction than IaaS - still an automatable collection of resources, but at an application-runtime level rather than at virtual-machine/network-router level.
While the ability to configure a VM allows a cloud user to achieve almost anything, it does not necessarily make it easy or cheap. Scaling the system in a cost-effective manner is particularly tricky; when a system is running software on N virtual machines and the load suddenly increases, starting further machines to handle the load is not an instantaneous process, and not trivial to implement. Scaling down can also be an issue - particularly for development environments or intranet environments, it is nice when the number of active machines (ie resources being paid for) can drop to a minimum (ideally zero) when no load is present.
Like IaaS, a PaaS provider often also provides external services such as databases and file-storage.
The PaaS environment I personally know best is Google AppEngine, so that will briefly be described here - but there are many systems that work similarly.
Google AppEngine
AppEngine is a service that allows “applications” - custom code - to be deployed without needing to deal with VM-level details. Such applications must obey a strict set of limitations and be packaged in a suitable way; in return, the PaaS environment can scale them (start multiple instances) and handle request routing (loadbalancing) and similar complex issues automatically.
AppEngine supports Java Web Application Archives (aka war-files), and similarly packaged apps written in Python, PHP, Node.js, Go, Ruby, and .NET.
For Java webapps, the developer simply creates a standard-format war-file (following all the rules for a standard webapp) and AppEngine (standard mode) provides the servlet-engine environment, the Java runtime environment, and the underlying operating system. Unlike regular webapps, the Java code cannot start threads, cannot write to local disk, and must complete each HTTP request within 60 seconds. In return, AppEngine will monitor the amount of time requests are taking and, when a threshold is reached, will simply start new instances of the application - as many as are needed to handle the load, even if that is thousands. Such instances start in just a couple of seconds, and are automatically registered with load-balancers to distribute requests across them. When load drops, unneeded instances are stopped - all the way down to zero instances when no load is present (and thus no fees), which is great for development environments or intranet-like apps which are only used during business hours. When a request is received while zero instances are running, one is simply started. In addition, the per-hour cost of running each AppEngine instance is very low - significantly lower than the price of a single VM in the same cloud.
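To illustrate how little the developer actually deploys, here is a sketch of a complete AppEngine standard-mode app for the Python runtime (using Flask; an accompanying app.yaml naming the runtime is also needed - the Java equivalent is simply a standard war-file):

```python
# All the developer supplies is a web application plus a small config
# file naming the runtime; AppEngine provides the web server, language
# runtime, operating system, autoscaling and load-balancing.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # keep handlers short-lived: instances may be started or stopped
    # by the platform at any moment in response to load
    return "hello from an autoscaled instance"
```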
Google doesn’t explicitly say how AppEngine (standard mode) instances are deployed on the Google infrastructure, but from looking at the logs it is clear that each AppEngine instance is actually running within a Linux container; the “deploy appengine” step causes Google to generate a container image internally.
Google doesn’t document exactly how they can afford to charge such low prices for AppEngine standard-mode apps, or how these can be started so quickly on demand. My personal speculation is that, for Java-based AppEngine deployments, Google starts (just once) a container running a JVM and a Jetty servlet engine with zero webapps, and then snapshots the container’s running state. To start a new Java AppEngine standard-mode instance they then simply “clone” that snapshotted image and mount the webapp war-file into it. Jetty must then execute its standard war-deploy logic, but no other per-app startup logic is needed, making startup fast. By “cloning” existing images and taking advantage of operating-system-level copy-on-write memory pages, they can also share a significant amount of both disk and memory between multiple AppEngine processes running on the same physical host - and thus pack more AppEngine instances into a single physical host than would normally be possible. This in turn would reduce per-instance costs. Presumably other supported languages (eg PHP) can be handled similarly.
Normally, container images from independent customers should not be run on the same host system, as inter-container security is not great - there are multiple ways for one container to interfere with others that happen to be on the same host. However in this case, the restrictions on AppEngine applications (eg no native libraries for Java-based apps, and no local files) reduce these risks. I suspect that Google takes advantage of this and packs containers for different customers onto the same host, allowing it to reduce costs further.
As noted, Google AppEngine also supports applications written in other languages and provides a comparable “execution environment” for those apps. Additional languages/runtimes may be implemented in future.
Java AppEngine instances are very limited in memory - a maximum of 1GB is available for the largest environment. This does make AppEngine standard mode unsuitable for heavy processing; the environment is best used for receiving requests, reading/writing databases or communicating with other back-end systems, and then either returning JSON or returning HTML built by combining a template with the fetched data. That still covers a lot of use-cases.
What is nice about using the Google environment is that both IaaS and PaaS are available. In particular, front-ends can be built with AppEngine standard mode while back-ends run on farms of virtual machines or docker-servers in the same cloud. Note however that an AppEngine front-end (with all the scalability benefits) can talk to dedicated back-ends in a company-specific datacenter too, ie hybrid solutions are definitely possible.
Other PaaS Providers
One of the oldest PaaS providers is Heroku, which supports application development in several languages.
Cloud Foundry is a complete app-management environment; applications may be built in any form for which a “build template” exists, eg Java webapps. Cloud Foundry is open-source, with many commercial providers offering hosting based on the product.
Amazon Elastic Beanstalk is the AWS PaaS solution, roughly equivalent to Google AppEngine.
Container-based PaaS
Running code in Linux containers has experienced a huge boom over the last few years. A cloud which supports deployment of container images sits somewhere between IaaS and PaaS, depending on your viewpoint. It is unlikely to scale quite as quickly as traditional PaaS environments, as the cloud environment has less control/knowledge of what is occurring within the container. However it gives the developer far more control over what tools or languages they use and how their application is packaged.
Each cloud provider seems to have invented their own API for deploying and executing containers: Amazon has ECS (two variants), Azure has Service Fabric, etc. Google’s solution is Kubernetes, and fortunately it appears that Kubernetes is slowly becoming a kind of standard, with multiple cloud providers now offering a “hosted Kubernetes” service (Google Kubernetes Engine, Amazon EKS, Azure AKS, etc).
Most cloud providers also offer a more managed application deployment environment which is based on containers but offers fewer config options in return for easier configuration and more scalable deployment. Google “AppEngine Flexible mode” is a cross between Docker and its traditional AppEngine standard mode - apps are packaged as a container but “orchestrated” using the AppEngine infrastructure rather than Kubernetes. Amazon Fargate is somewhat similar, as is the Azure App Service.
Serverless Computing
PaaS is intended to take away the pain of managing a runtime environment for applications (at the price of some flexibility). The most extreme kind of PaaS is the very recent availability of “serverless” environments, in which developers write individual handler functions and deploy them into a cloud environment. A handler function is typically either an HTTP handler (ie processes HTTP requests) or a message-handler (processes messages on a message-queue).
Amazon’s implementation of this concept is called AWS Lambda and Google’s implementation is called Cloud Functions.
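To show how small the unit of deployment becomes, here is a sketch of an HTTP handler following AWS Lambda’s Python calling convention (the event shape shown assumes an API-Gateway-style HTTP trigger; Google Cloud Functions uses a similar one-function-per-handler model):

```python
import json

# AWS Lambda's Python convention: a single function receiving the event
# (here an HTTP request) and a context object - no server code anywhere.
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"hello {name}"}),
    }
```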
There are a lot of things that such functions are not suitable for - but development can be very rapid, prices low, and scalability truly enormous.
Summary
The options for deploying code - VM-based, container-based, PaaS-based, serverless - form a spectrum from most-flexible/least-scalable/highest-admin to least-flexible/most-scalable/lowest-admin. In short, something for everyone.
Footnotes
1. Often fewer blocks are initially reserved for a logical volume than requested (“thin provisioning”), on the assumption that overall the amount of storage used by all VMs will be significantly less than the amount they request. Of course, if every VM suddenly tried to fill every volume attached to it then there would be problems - but that would require the majority of users of the cloud to act simultaneously, which is not likely.