(TWN) DevOps Fundamentals: Containerization with Docker
Docker is an open-source platform that has revolutionized how we build, ship, and run applications by introducing a standardized way to package software.
In this article, we will cover: What is a container? How do Docker containers and images differ? How do containers differ from virtual machines? Then we will take a deeper dive into Docker itself.
1. What is a Container?
At its core, a container is a way to package an application along with all its necessary dependencies, libraries, and configurations. It serves as a portable artifact that can be easily shared and moved across different environments—from a developer's laptop to a testing server and finally to production—without any changes in behavior.
Where do containers live?
Containers are stored in repositories.
- DockerHub: The primary public repository for sharing Docker images.
- Private Repositories: Most companies maintain their own private registries to secure proprietary code.
How Containers Improve the Development Process
Containers allow developers to run services (like databases or message brokers) without installing them directly on their host operating system. Because each container is an isolated OS layer (typically a Linux-based image), your local machine stays clean, and the environment remains consistent for everyone on the team.
2. Containers vs. Images: The Technical Distinction
It is common to use these terms interchangeably, but they represent different states of an application:
- Image: The actual package. It is a read-only artifact containing the compiled code and dependencies. Technically, an image is made of stacked layers, usually starting with a small Linux base image.
- Container: A running instance of an image. When you "pull" an image and start it, it becomes a container.
- Not running = Image
- Running = Container
3. Containers vs. Virtual Machines (VMs)
While containers might feel like VMs, they differ in what they virtualize:
- VMs: Virtualize both the OS Application layer and the OS Kernel layer. This makes them heavy and slow to start.
- Docker: Virtualizes only the OS Application layer. It shares the host's Linux kernel, making containers significantly faster and more lightweight.
Note on Compatibility: Docker was built for Linux. On Windows or Mac, Docker Desktop uses a hypervisor layer with a lightweight Linux distribution to provide the necessary kernel.
4. Deep-Dive into Docker
Docker Architecture & Components
The Docker Engine consists of three primary components:
- Docker Server (Daemon): Manages images, containers, networks, and volumes.
- Docker API: The interface used to interact with the Docker Server.
- Docker CLI: The command-line client where users execute commands.
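You can see this client/server split directly from the command line; a quick check, assuming Docker is installed locally:

```shell
# Show version info for both the CLI client and the daemon (server)
docker version

# Show daemon-level details: storage driver, images, containers, networks
docker info
```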
Essential Docker Commands
To manage the lifecycle of a container, you will frequently use these commands:
- `docker pull`: Download an image from a registry.
- `docker images`: List all locally stored images.
- `docker ps`: List running containers.
- `docker run`: Create and start a container from an image.
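Put together, a typical lifecycle looks like this (a sketch, assuming Docker is installed and using the public `nginx` image from DockerHub as an example):

```shell
# Download the image from DockerHub (the default registry)
docker pull nginx

# Confirm it is stored locally -- at this point it is just an image
docker images

# Start a container from the image; -d runs it in the background
docker run -d --name web nginx

# The running instance now shows up as a container
docker ps
```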
Port Mapping
Since containers run in isolation, you must map the Container Port to a Host Port to access the application from your machine. Multiple containers can run on a single host as long as their host ports do not conflict.
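For example, to expose a container's port 80 on host port 8080 (the `nginx` image and port numbers here are purely illustrative):

```shell
# -p HOST_PORT:CONTAINER_PORT maps the container port onto the host
docker run -d -p 8080:80 nginx

# A second container can use the same container port,
# as long as the host port differs
docker run -d -p 8081:80 nginx
```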
5. Docker Compose
Orchestration with Docker Compose and YAML
When an application requires multiple services (e.g., a Node.js backend and a MongoDB database), managing individual docker run commands becomes tedious. This is where Docker Compose comes in.
What is YAML?
YAML (which stands for "YAML Ain't Markup Language") is a human-friendly configuration format. It is critical for docker-compose.yml files because it allows you to define multi-container applications in a declarative way.
Dockerfile vs. Docker Compose
- Dockerfile: A text file containing instructions to build a single image (e.g., `FROM node`, `COPY`, `RUN`).
- docker-compose.yml: A file used to run and orchestrate multiple containers together, defining their networks, volumes, and startup order.
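A minimal Dockerfile for a Node.js service might look like this (a sketch; the file names and port are assumptions, not from a specific project):

```dockerfile
# Start from a small official Node.js base image
FROM node:18-alpine

WORKDIR /app

# Copy dependency manifests first so this layer stays cached
# until package.json actually changes
COPY package*.json ./
RUN npm install

# Copy the application source
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```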
6. Networking & Storage in Docker
Data Persistence and Networking
- Docker Network: Docker creates isolated networks for containers. Within a Compose file, containers can communicate with each other simply by using their service names.
- Docker Volumes: By default, data inside a container is lost when the container is deleted. Volumes allow you to persist data generated by the container (like database records) on the host machine.
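Both ideas come together in a Compose file; a minimal sketch for the Node.js + MongoDB pairing mentioned earlier (service names, ports, and the volume name are illustrative):

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      # The app reaches the database by its service name, not an IP:
      MONGO_URL: mongodb://mongodb:27017/mydb
  mongodb:
    image: mongo
    volumes:
      # Named volume: database records survive container deletion
      - mongo-data:/data/db

volumes:
  mongo-data:
```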
7. Connection between Docker & Jenkins
The CI/CD Workflow: Jenkins and Docker
In a professional pipeline, Docker is integrated into the Continuous Integration (CI) process:
- Development: A developer writes code and a Dockerfile.
- Commit: The code and Dockerfile are pushed to a Git repository.
- Build: A CI server (like Jenkins) takes the Dockerfile, builds a Docker image (e.g., turning a Java JAR into an image), and tags it.
- Push: Jenkins pushes the image to a private registry using the naming convention `registryDomain/imageName:tag`.
- Deploy: The development or production servers pull the new image and run it.
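The build-and-push steps above can be sketched as the shell commands a Jenkins pipeline would run (the registry domain, image name, and tag are placeholders):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-app .

# Tag it following the private registry's naming convention:
# registryDomain/imageName:tag
docker tag my-app registry.example.com/my-app:1.0

# Authenticate and push to the private registry
docker login registry.example.com
docker push registry.example.com/my-app:1.0
```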
8. Docker Best Practices
To ensure your containers are secure and efficient, follow these industry standards:
- Use Official Images: Always start with verified base images.
- Keep Images Small: Use "Slim" or "Alpine" versions of images to reduce the attack surface and download time.
- Optimize Caching: Order your Dockerfile commands from least to most frequently changing to take advantage of layer caching.
- Use .dockerignore: Explicitly exclude files (like `node_modules` or logs) from being sent to the Docker daemon.
- Multi-stage Builds: Use one stage for building the app and a second, smaller stage for running it.
- Least Privilege: Avoid running containers as the `root` user; create a dedicated user within the Dockerfile.
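Several of these practices can be combined in one Dockerfile; a sketch for a Node.js app, assuming files like `package.json` and a `build` script exist:

```dockerfile
# --- Build stage: install dependencies and compile the app ---
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- Run stage: small Alpine image with only runtime artifacts ---
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules

# Least privilege: create and switch to a non-root user
RUN addgroup -S app && adduser -S app -G app
USER app

CMD ["node", "dist/server.js"]
```

A matching .dockerignore would list `node_modules` and log files so they are never sent to the daemon in the first place.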
In Conclusion:
By adopting Docker, you simplify complex deployments—such as setting up a Nexus repository or a MongoDB cluster—into a few lines of code, ensuring that your application runs perfectly everywhere.