What is Docker? How can I get started?

When getting started with Docker, we first need to understand what Docker is. Docker is an OS-level virtualization software platform that allows IT organizations to quickly create, deploy, and run applications in Docker containers, which carry all of their dependencies with them. The container itself is a very lightweight package that bundles all the instructions and dependencies, such as frameworks, libraries, and binaries, within it.

A Docker container can be moved from environment to environment very easily. In a DevOps life cycle, Docker really shines during deployment: when you deploy your solution, you want to guarantee that the code you tested will actually work in production. It is also beneficial to run the solution in a container during the build and test stages, because you can validate your work in the same environment that production uses.

You can use Docker throughout multiple stages of your DevOps cycle, but it is especially valuable in the deployment stage because it enables rapid, repeatable deployments. In addition, the environment itself is highly portable and was designed to be efficient enough to run many Docker containers on a single host, unlike traditional virtual machine environments.

How Does Docker Work?

Docker works via a Docker Engine that is composed of two key elements: a server and a client, which communicate over a REST API. The client (the docker CLI) sends instructions to the server (the dockerd daemon), which does the actual work of building, running, and managing containers. On older Windows and Mac systems, you can take advantage of Docker Toolbox, which lets you control the Docker Engine and includes tools such as Compose and Kitematic.
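To see the client/server split in practice, here is a minimal sketch using the standard docker CLI; the Unix socket path shown is the typical Linux default, not guaranteed on every setup.

```
# "docker version" reports both halves of the Docker Engine:
# the Client (the CLI you type commands into) and the Server
# (dockerd, the daemon that actually builds and runs containers).
docker version

# The client talks to the daemon over a REST API. On Linux this
# defaults to a Unix socket, which you can also query directly:
curl --unix-socket /var/run/docker.sock http://localhost/version
```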

Now that we have learned about Docker, its advantages, and how it works, our next focus in this getting-started tutorial is to learn the various components of Docker.

Docker Images

A Docker image is built from the instructions in a Dockerfile and then hosted in a Docker registry. The image consists of several key layers, and each layer depends on the layer below it. Image layers are created by executing each command in the Dockerfile, and they are read-only. You start with your base layer, which will typically contain your base image and operating system, and then you add a layer of dependencies above that, followed by your application code. Together, these layers make up the read-only image that your Dockerfile describes.
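Here is a minimal Dockerfile sketch to make the layering concrete; the file names (requirements.txt, app.py) and the base image are illustrative assumptions, not part of this tutorial's example.

```
# Each instruction below produces one read-only image layer,
# stacked bottom-up exactly as described above.
FROM python:3.12-slim                  # base layer: base image and OS userland
COPY requirements.txt .                # layer: dependency manifest (hypothetical file)
RUN pip install -r requirements.txt    # layer: installed dependencies
COPY app.py .                          # layer: application code (hypothetical file)
CMD ["python", "app.py"]               # default command (metadata, no filesystem layer)
```

You would build this into an image with a command like docker build -t my-app . where my-app is a name tag of your choosing.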

What happens when you pull an image and then something needs to change? Interestingly, the image itself cannot be modified: its layers are immutable. Once you have pulled an image, any changes you make happen in a writable layer on top of it, local to your container; the underlying base image is never modified.
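A short sketch of this immutability, assuming the public nginx image; the container and image names are arbitrary choices.

```
# Inspect the read-only layers that make up an image:
docker history nginx:latest

# Changes made inside a running container live in a writable layer
# on top of the image; nginx:latest itself is never modified.
docker run -d --name demo nginx:latest
docker exec demo touch /tmp/local-change

# Snapshotting those changes produces a *new* image:
docker commit demo my-modified-nginx
```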

Docker Registry

The Docker registry is where you host various types of images and where you distribute them from. A repository within the registry is just a collection of Docker images, built from Dockerfile instructions, that are very easy to store and share. You can give Docker images name tags so that it's easy for people to find and share them within the registry. One way to get started is to use the publicly accessible Docker Hub registry, which is available to anybody. You can also create your own registry for internal use.
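A sketch of the tag-and-push workflow against Docker Hub; myuser/my-app is a hypothetical repository name standing in for your own.

```
# Tag a locally built image with a repository name and version tag:
docker tag my-app myuser/my-app:1.0

# Push it to Docker Hub (run "docker login" first):
docker push myuser/my-app:1.0

# Anyone with access can now pull it by the same name:tag:
docker pull myuser/my-app:1.0
```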

Docker Container

The Docker container is an executable package that bundles an application and its dependencies together; it carries all the instructions for the solution you're looking to run. It's really lightweight because it shares the host's kernel rather than shipping a full operating system, and it is inherently portable. Another benefit is that it runs in isolation: the container is insulated from the host OS's particular security settings and unique setup, so it behaves consistently anywhere, unlike a non-containerized deployment. Host memory is also shared across multiple containers on demand, which is really useful compared with virtual machines, where each VM reserves a defined amount of memory up front.
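A minimal run sketch, assuming the public nginx image; the port numbers, container name, and memory cap are arbitrary illustrative choices.

```
# Start a container in the background (-d), mapping host port 8080
# to port 80 inside the isolated container:
docker run -d --name web -p 8080:80 nginx

# Memory is shared with the host by default, but you can impose a
# VM-style cap on an individual container if you want one:
docker run -d --memory=256m nginx
```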

Docker Compose

Docker Compose is designed for running multiple containers as a single service. It does so by running each container in isolation while allowing the containers to interact with one another. You write Compose environments in YAML.
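A minimal docker-compose.yml sketch for two interacting containers; the service names and images are illustrative assumptions.

```
# docker-compose.yml: two containers run as a single service stack.
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    depends_on:
      - db                         # "web" can reach "db" by its service name
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # hypothetical; use secrets in practice
```

You would bring the whole stack up with docker compose up -d and tear it down again with docker compose down.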