Pradeep Singh | 24th Jun 2017
What is a Container?
To instantiate a Container, you need a Container Image. So, first of all, let’s understand what a Container Image is –
A Container Image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings.
The container image is built on a union filesystem, a filesystem made of layers. Every instruction in the Dockerfile creates a new layer in the image, and all of these image layers are read-only. When you start a container (or multiple containers from the same image), Docker only adds a thin writable container layer on top of the shared read-only image layers; a storage driver handles the details of how these layers interact with each other.
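To make the layering concrete, here is a minimal sketch (the image tag, file names, and the layer-demo name are purely illustrative): each Dockerfile instruction produces one read-only layer, which docker history then lists.

```
# Write a tiny three-instruction Dockerfile; each instruction becomes one layer.
cat > Dockerfile <<'EOF'
FROM alpine:3.6
RUN apk add --no-cache curl
COPY app.sh /usr/local/bin/app.sh
EOF

echo 'echo hello from the container' > app.sh   # dummy file so the COPY instruction has something to copy
docker build -t layer-demo .                    # build the image; each instruction above adds one layer
docker history layer-demo                       # list the stacked read-only layers and their sizes
```

Because unchanged layers are cached and shared, rebuilding after editing only app.sh reuses the FROM and RUN layers.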
A Container is a runtime instance of a container image; or what the container image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.
The major difference between a container and an image is the top writable layer. All writes to the container that add new or modify existing data are stored in this writable layer. When the container is deleted, the writable layer is also deleted. The underlying image remains unchanged.
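As a rough illustration of the writable layer (the container and image names here are arbitrary, assuming a local Docker install):

```
docker run --name scratchpad alpine:3.6 sh -c 'echo "temporary data" > /notes.txt'   # write into the container's writable layer
docker commit scratchpad notes-snapshot        # optionally capture that writable layer as a new image
docker rm scratchpad                           # deleting the container discards its writable layer
docker run --rm alpine:3.6 cat /notes.txt      # fails: the underlying alpine image was never modified
```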
Unlike Virtual Machines, containers do not bundle a full operating system – only the libraries and settings required to make the software work are included.
Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it’s deployed.
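A quick way to see the shared kernel and per-container isolation (the image names are just examples, assuming Docker on a Linux host):

```
uname -r                              # kernel release on the host
docker run --rm alpine:3.6 uname -r   # same kernel release, reported from inside a container

docker run -d --name web1 nginx:1.13-alpine   # two containers from the same image...
docker run -d --name web2 nginx:1.13-alpine   # ...each runs as an isolated process in user space
docker ps                                     # both show up as separate running containers
docker rm -f web1 web2                        # clean up
```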
A Brief History of Containers:
- 1980 – chroot (Change Root) (Details).
- 2000 – FreeBSD Jails. It allows administrators to partition a FreeBSD system into several independent, smaller systems (jails), with the ability to assign an IP address and configuration to each (Details).
- 2001 – Linux-VServer. It allows several general-purpose Linux servers (virtual private servers) to run on a single computer with a high degree of independence and security (Details).
- 2004 – Solaris Containers. It is an implementation of operating system-level virtualization technology for x86 and SPARC systems (Details).
- 2005 – OpenVZ (Open Virtuozzo). It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or Virtual Environments (VEs). OpenVZ is similar to Solaris Containers (Details).
- 2007 – Control Groups (cgroups, originally named Process Containers). It is a Linux kernel feature used to group processes together and allocate resources (CPU, memory, I/O) to those groups (Details).
- 2008 – Linux Containers (LXC). It is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel (Details).
- 2013 – Docker. It allows developers to create and run application containers quickly. A Docker container, unlike a virtual machine, does not require or include a separate operating system (Details).
- 2016 – rkt. CoreOS released v1.0 of rkt, an alternative container engine with a focus on security and efficiency (Details).
What is Docker?
- Docker is the world’s leading software container platform. It was originally created by dotCloud (a PaaS provider) in 2010.
- Docker was released in March 2013 as an open source project, using LXC (Linux Containers) as its execution environment. LXC was later replaced with Docker’s own library, libcontainer.
- In 2015, Docker donated the libcontainer project to the Open Container Initiative (OCI). It became the basis of runC, the OCI’s reference container runtime (Docker 1.11 uses runC).
Docker provides an integrated technology suite that enables developers and IT operations teams to build, ship, and run distributed applications anywhere.
Docker consists of two main components –
- Docker Engine: the application that runs on top of the host Operating System; it builds images and runs containers.
- Docker Hub: a SaaS (Software as a Service) offering from Docker for managing and sharing container images.
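A small, hedged sketch of how these two components show up on the command line (the image name is only an example):

```
docker version          # Docker Engine: prints the versions of the CLI client and the daemon (server)
docker info             # Docker Engine: details about the daemon running on this host

docker login            # Docker Hub: authenticate against Docker's hosted registry
docker pull alpine:3.6  # Docker Hub: download a publicly shared image
```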
What is Container Registry?
A registry is a collection of repositories, and a repository is a collection of images—sort of like a GitHub repository, except the code is already built. An account on a registry can create many repositories. The “docker” CLI uses Docker’s public registry by default.
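For example, here is a rough sketch of how registry, repository, and tag fit together (the myaccount namespace and registry.example.com host are placeholders):

```
docker pull ubuntu:16.04                        # no registry specified, so the default public registry (Docker Hub) is used
docker tag ubuntu:16.04 myaccount/demo:1.0      # retag the image into the "demo" repository under your account
docker push myaccount/demo:1.0                  # upload it to that repository on Docker Hub
docker pull registry.example.com/team/demo:1.0  # a fully qualified name points at a (hypothetical) private registry
```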
What is Container Orchestration?
Container orchestration is what is needed to transition from deploying containers individually on a single host, to deploying complex multi-container apps on many machines. It requires a distributed platform, independent from infrastructure, that stays online through the entire lifetime of your application, surviving hardware failure and software updates.
Following are some of the main Container Orchestration Tools –
- Kubernetes: Descended from a platform (Borg) that Google developed to manage its infrastructure, Kubernetes is well suited for very large cloud environments (Details).
- Swarm: This is Docker’s home-grown orchestration platform. Docker claims it’s faster than the competition (Details).
- Mesos: Hosted by Apache, Mesos is designed for general datacenter management, not just containers (Details).
- Kontena: Relatively new orchestration tool that promises to “maximize developer happiness.” It’s designed for ease-of-use, making it a good option for admins new to the container game (Details).
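As a minimal, hedged sketch of orchestration in practice, here is what running a replicated service looks like with Docker Swarm, one of the tools listed above (the service name and image are arbitrary examples):

```
docker swarm init                        # turn this host into a single-node swarm manager
docker service create --name web --replicas 3 -p 80:80 nginx:1.13-alpine
docker service ls                        # the orchestrator keeps 3 replicas of the service running
docker service scale web=5               # scale out; the swarm schedules the extra replicas
docker service rm web                    # tear the service down
```

If one of the replica containers dies, the swarm scheduler starts a replacement to maintain the declared count – the kind of self-healing behaviour described above.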