Channel: software engineering – Avantica Technologies Blog

Containerization


When they went mainstream a few years back, virtual machines brought about the first cloud revolution. Today, most cloud-based infrastructure is built on hypervisors: classic IaaS (Infrastructure-as-a-Service) platforms like Amazon Web Services and Microsoft Azure are made up of virtual machines decoupled from any specific physical server. This has enabled an entire ecosystem to manage data centers efficiently by provisioning these VMs and automating deployment processes, allowing systems to scale easily as needed.

Containers, on the other hand, are creating a new way to think about cloud infrastructure. They deliver the same benefits as virtual machines, but at lower cost and with less overhead.

What is Containerization anyway?

Simply put, containerization provides lightweight virtualization with almost zero overhead: a lighter alternative to full machine virtualization. Containerization encapsulates an application in a “container” with its own runtime environment; a container holds only the resources needed to run the application it hosts, which results in more efficient use of the underlying resources.
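As an illustration, a minimal Dockerfile (a hypothetical sketch; the application file and dependency list are assumed names, not from the original article) packages an application together with only the runtime it needs:

```dockerfile
# Start from a slim base image that contains only the language runtime
FROM python:3.12-slim

WORKDIR /app

# Install only the dependencies this application needs
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the container's filesystem
COPY . .

# The command the container runs when it starts
CMD ["python", "app.py"]
```

The resulting image carries the application and its runtime environment, and nothing else, which is what makes the container both portable and lightweight.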

The containerization revolution touches all IT areas within an organization, from how applications are conceived and designed to how they are deployed.

A containerized ecosystem should be thought of in terms of a microservices-based architecture, a fundamental characteristic of cloud-based applications. In such an architecture, each microservice (e.g. web server, database, application server, queue) runs in its own container.
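A sketch of that layout using Docker Compose makes the idea concrete (the service names and images below are illustrative assumptions, not a prescribed stack):

```yaml
# Each microservice runs in its own container
services:
  web:
    image: nginx:alpine          # web server
    ports:
      - "80:80"
  app:
    build: ./app                 # application server (hypothetical local build)
    depends_on:
      - db
      - queue
  db:
    image: postgres:16           # database
    environment:
      POSTGRES_PASSWORD: example
  queue:
    image: rabbitmq:3            # message queue
```

Each service can then be scaled, upgraded, or replaced independently of the others.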

Also, since a containerized application is “portable” and “lightweight,” it can be shipped faster and deployed anywhere, unchanged. This allows for standardized environments across all the steps of the software development lifecycle: development, QA, staging, and production.

Containerization vs Virtualization

To put things in perspective, a virtual machine runs a full guest operating system, with its own memory management and the associated overhead of virtual device drivers. In a virtual machine, hardware resources are emulated by the hypervisor for the guest operating system, which makes it possible to run many instances of one or more operating systems in parallel on a single machine (or host). Every guest operating system runs as an entity independent of the host system.

Containerization, on the other hand, eliminates all of the “baggage” of virtualization by getting rid of the hypervisor and its full-blown virtual machines. Each application is deployed in its own container that runs directly on the host operating system; that is, all containers share a single instance of the operating system. One way to think of it is as a form of multi-tenancy at the operating-system level.

This lightweight approach to application encapsulation means that containers can be provisioned and deprovisioned in seconds rather than minutes, enabling cloud-based applications to scale rapidly: applications can launch on demand as requests come in, with virtually no idle memory or CPU overhead.

Containers and virtual machines have different strengths and weaknesses, and therefore they should be viewed as complementary tools. For example, containers are especially good for early development, where everything is new and changes rapidly, because the speed with which they can be manually provisioned and deprovisioned greatly outweighs the manageability advantages of a virtual machine.

Future

Even though containers are not new (Unix-like systems have had them for years), it was not until Docker popularized the technology that it gained widespread attention. Almost all major IT vendors and cloud providers now offer container-based solutions, and several start-ups are bringing new ideas and tools to the ecosystem. With such a rapidly growing ecosystem of providers and offerings, the promise of “application portability” needs to be kept, and to do so an industry standard needs to be created.

While the Docker project has made the Docker image format the de facto industry standard, a vendor-neutral project, supported by most major companies, has been created to produce that industry specification: the Open Container Project.

There is still a lot going on in the container industry (the standard first needs to be defined), but vendors will continue to push the revolution forward. Embracing the technology is not a question of why but rather of when and how, and those who adopt containers will gain a competitive advantage over those who are slow to embrace the technology.

By Rodrigo Vargas.

