Docker is an open-source tool that helps developers create, manage, and deploy containers. These containers can be run locally or easily deployed to the cloud with great support available in AWS, Azure, and DigitalOcean. In addition to the open-source tools, Docker is also a company that provides hosted services for running and managing containers.
Before discussing why Docker is so useful, it’s worth taking a step back and making sure that we understand the concept of a container. Containers are essentially a way to isolate processes so that everything running in a container has its own view of various resources (e.g., the process list, memory, filesystems). That means that if a process running in one container writes a file, a process running in another container won’t see those changes.
So how is this different from a virtual machine (VM)? While both containers and VMs isolate resources, VMs accomplish this by emulating the entire hardware layer. While hardware virtualization is relatively efficient these days, you still get the additional overhead of running a separate operating system environment. Containers, on the other hand, rely on an existing OS kernel for isolation.
Although the technical differences between VMs and containers may not seem like a big deal, containers have many real-world benefits. For example,
- VMs take tens of seconds to start, while a container will often take less than a second.
- Containers are much more efficient in terms of resource utilization.
- Containers are usually single-purpose, making it easier to secure and deploy your application.
Prior to Docker, working with containers was cumbersome and involved a patchwork of lower-level tools (cgroups, LXC, etc.). Docker greatly simplified the process of creating, running, and managing containers by combining these utilities behind an easy-to-use API and command-line interface.
While resource isolation isn’t particularly new, the ability to quickly deploy low-overhead containers has opened possibilities for some interesting use cases.
If you’re using a popular Linux distro (e.g., Ubuntu, Red Hat/CentOS), Docker provides an install script that does most of the heavy lifting. Just type the following into your terminal. You’ll need root privileges, so you may need to prepend sudo to the command.
$ wget -qO- https://get.docker.com/ | sh
A common use case is to run a database in a Docker container. Let’s look for the MongoDB image and install that. First, search for “mongodb” in the Kitematic search bar. Then click the install button. This will automatically download and start the image.
Once the image has started, you’ll see it on the left-hand pane. If you click on that, it should give you information about how to connect to that instance under the Ports section.
If you’re not using Kitematic, you can still download and run the MongoDB image manually. Just type the following into your terminal:
$ docker pull mongo
$ docker run --name my-mongo-instance -d -P mongo
$ docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                      NAMES
1b978f09287d        mongo:latest        "/entrypoint.sh        2 seconds ago       Up 1 seconds        0.0.0.0:32768->27017/tcp   my-mongo-instance
The connection information is located under the PORTS column. In this case, you should be able to connect to the MongoDB instance via localhost:32768.
You should now feel confident enough to download and run additional Docker images. Some popular ones include Redis, WordPress, and Node.js. The real fun, though, starts once you begin creating your own Docker images.
In Part 2 of this series, we’ll explore how to create your own Docker images using Dockerfiles. We’ll also cover how to upload these images to Docker Hub and share them.