Containers are a hot topic in the IT industry, and understanding how to work with them is an essential skill for any IT professional. Containerization lets us deploy applications as containers, which allows us to scale them much more quickly and efficiently.
A container is a run-time instance of an image. In the container world, an image is simply a file that contains everything needed to run an application: application code, libraries, tools, dependencies, and other supporting files. Containers use the underlying OS kernel and namespaces, so they do not need a dedicated operating system of their own. To deploy a container we need a container runtime; a popular and widely used runtime is Docker.
Docker is a container runtime that allows us to deploy containers. It is available for Linux, Windows, and Mac.
Containers natively run only on Linux, so to support other operating systems Docker creates a Linux virtual machine on top of the host OS and deploys the Docker engine inside it. The process is seamless. If you have tried installing Docker on Windows, you may have noticed that the installer asks you to enable either Hyper-V or Windows Subsystem for Linux; this is because Docker natively supports only Linux-based operating systems.
In this session, we are going to deploy Docker on a Linux server/machine.
Requirements
Linux version: Ubuntu and Red Hat are the most commonly used distributions for Docker deployments, but others work fine too. The commands below are for Ubuntu.
Adding the Docker repository to our server: this lets the server download Docker's packages from its repository. A repository is simply a place where packages are stored.
1. Add the official Docker GPG key to your server:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Note: GPG, or GNU Privacy Guard, is a public key cryptography implementation. This allows for the secure transmission of information between parties and can be used to verify that the origin of a message is genuine.
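If you want to confirm that the key was saved correctly, one optional check (assuming gpg is available on the server) is to print the keys in the file without importing them:
gpg --show-keys /etc/apt/keyrings/docker.gpg
The output should list Docker's release signing key.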
2. Add the Docker repository to the server:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
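To double-check the entry that was just written, you can simply print the new source list file (the same path used in the command above):
cat /etc/apt/sources.list.d/docker.list
It should contain a single deb line pointing to download.docker.com for your Ubuntu release.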
3. Once the repository is added, we have to refresh the package manager. Run the following command so that apt can find Docker-related packages in the new repository:
sudo apt-get update
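As an optional sanity check, you can confirm that apt now sees the docker-ce package from the new repository:
apt-cache policy docker-ce
The candidate version should come from download.docker.com rather than the default Ubuntu repositories.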
4. Install Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
It will ask you to confirm the installation; type 'Y' to proceed. Once this step is done, Docker will be installed on our server.
5. Verify the Docker version to confirm the installation
docker version
If everything went fine, you will see version details for the Docker installation. Make sure both the Client and Server sections are present in the output; if not, reinstall Docker.
Note: Run the following command to start Docker, in case it is not started automatically:
systemctl start docker
To make Docker start automatically after a reboot, run the command:
systemctl enable docker
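You can also check that the Docker service is healthy at any time:
systemctl status docker
The status should show the service as active (running).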
Congratulations. Docker has been successfully installed.
If you want to learn more about Docker and its commands, visit my GitHub page. I have created a document listing commonly used Docker commands: Click here
By default, Docker runs with root access only, and it is not recommended to use Docker as the root user. Instead, we should grant Docker access to a normal (non-root) user.
Run the following commands to add a normal user to the docker group so that the user can run Docker commands.
Create a group named docker:
sudo groupadd docker
Add a non-root user 'user1' to the docker group:
sudo usermod -aG docker user1
Note: Log out and log back in as user1 for the group change to take effect before running Docker commands.
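To verify that user1 can now talk to the Docker daemon without sudo, run a simple Docker command as that user, for example:
docker ps
If it prints a (possibly empty) list of containers instead of a permission-denied error, the group change has taken effect.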
We are now going to deploy an Nginx Docker container. (Nginx is a web server.)
Basic Docker commands that we are going to use:
docker images : to see the images available in our server
docker pull <image name> : to download an image to our server
docker run or docker create : to create a container (docker run also starts it)
docker pull nginx
This command pulls the nginx image from Docker Hub (hub.docker.com). Docker Hub is a public Docker registry that stores Docker images.
Once the image is downloaded, you can verify it by running the command
docker images
Run this command to create a container with that image
docker run -p 8080:80 nginx
Explanation of the command:
docker : the base keyword used to run any Docker command, e.g. docker pull or docker run
run : creates a container from the image and starts it
-p 8080:80 : -p denotes publish. 8080 is the port on which our application is accessible from the host machine/server, and 80 is the port on which Nginx is running inside the container. We are mapping container port 80 to host port 8080.
Note: We cannot directly access the container from the internet unless we link the container port to the host port.
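The command above runs Nginx in the foreground. As a small sketch, you can also run it in the background (detached) with a container name of your own choosing (my-nginx below is just an example name) and then list the running containers; stop the foreground container first, since a host port can only be published once:
docker run -d --name my-nginx -p 8080:80 nginx
docker ps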
Verify the web server
Go to the following address in your browser:
<serverIP>:8080 (where serverIP is the IP address of your server).
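If your server has no browser, a quick alternative check (assuming curl is installed) is:
curl http://<serverIP>:8080
This should return the HTML of the Nginx welcome page.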
We have successfully installed docker and deployed our first application as a docker container.
Docker helps us deploy applications quickly and in a sandbox. We can create any number of instances of an application by running multiple containers, which makes the application scalable. Because the container depends only on Docker, the image will run on any OS or machine with Docker installed, and we don't need to worry about dependency issues. We can also deploy Docker as a cluster called Docker Swarm, which helps us build highly available applications. A Docker Swarm is a cluster of multiple servers running Docker, with manager and worker nodes. It also orchestrates Docker deployments and makes our life much easier.
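As a rough sketch of how a swarm is started (not covered in this guide), the first node is initialised as a manager and the other nodes join it using the token printed by the init command; <manager-ip> and <token> below are placeholders you would replace with your own values:
docker swarm init --advertise-addr <manager-ip>
docker swarm join --token <token> <manager-ip>:2377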
About the Author
Aswin KS is a Cloud Engineer with 4+ years of experience designing, implementing, and managing cloud infrastructure for various industries. Proven track record of designing and deploying scalable, secure, and cost-effective solutions on AWS, Azure, and VMware. Strong understanding of cloud computing architectures, virtualization, containerization, servers, DevOps, and security.