How to Install Docker on Ubuntu 20.04
Docker is a platform that is currently popular among software developers and system administrators. One of its advantages is that it lets companies develop software faster, optimize their IT infrastructure, and ultimately deliver significantly more efficient solutions to users.
What is Docker?
Docker is an open platform for developers and system administrators to build, deploy, and run distributed applications, whether on laptops, virtual data centers, or cloud data centers.
The Docker container system wraps a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same way, regardless of the environment, and is one of Docker's main advantages.
Containers running on a single machine share the same operating system kernel, so they start immediately and use less RAM. An image (a collection of files and folders) is built from a layered filesystem and shares common files, which makes disk usage and image downloads much more efficient.
Thanks to these advantages, Docker has been widely adopted by large companies, and official images exist for popular software such as Nginx, MySQL, Postgres, cAdvisor, Ubuntu, MongoDB, and more.
Some of the Advantages of Docker as a Container Platform
Container systems are a solution to the problem of how to run software reliably when moving from one computing environment to another. This can mean moving from developer laptops to test environments, from staging environments to production, and possibly from physical devices in the data center to virtual machines in a private or public cloud.
Docker is one of the most popular open-source projects: it lets you deploy applications in containers while adding a layer of abstraction. The project is maturing constantly, and the benefits of using Docker keep growing.
Here are some of the advantages of docker:
Continuous Testing and Distribution of Applications
Docker has caught the attention of developers around the world for its ability to provide consistency across operating system environments. Unless you have a private repository environment with strict checks in place, there are always minor differences between the development environment and the released application.
These differences may be due to different package versions or dependencies. Docker closes this gap by guaranteeing a consistent environment from development to production. Docker containers carry all of their internal configuration and dependencies with them, so you can use the same container from development through to the released product and be sure there are no manual differences or interventions.
With Docker containers, you can also ensure that developers do not need a production environment that is set up identically. Instead, they can run Docker containers on their own systems, for example in VirtualBox.
Another advantage of Docker is that the same containers run on Amazon EC2. If you need to upgrade during the product's release cycle, you can easily make the necessary changes to the Docker containers, test them, and apply the same changes to the existing containers.
Just like standard deployment and integration processes, these advantages of Docker allow anyone to build, test, and release images that can be deployed across multiple servers. Even when a new security patch becomes available, the process stays the same: apply the patch, test it, and release it into production.
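As a sketch of that build, test, and release flow (the image name myorg/myapp and the test script run-tests.sh are hypothetical):

```shell
# Build an image from the Dockerfile in the current directory
# (myorg/myapp and run-tests.sh are hypothetical names).
docker build -t myorg/myapp:1.0 .

# Run the test suite in a throwaway container; --rm removes it afterwards.
docker run --rm myorg/myapp:1.0 ./run-tests.sh

# If the tests pass, push the image to a registry for deployment.
docker push myorg/myapp:1.0
```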
Docker containers can run on an Amazon EC2 instance, Google Compute Engine, a Rackspace server, or VirtualBox, provided the host OS supports Docker. A container running on an Amazon EC2 instance can therefore be ported easily between environments, with the same consistency and functionality everywhere. This provides a level of abstraction from the infrastructure layer.
In addition to AWS and GCP, Docker works very well with various other IaaS providers such as Microsoft Azure, and OpenStack, and can be used with various configuration management such as Chef, Puppet, and Ansible.
Environmental Standardization and Version Control
As discussed above, Docker containers ensure consistency across multiple development and release cycles, standardizing the environment. On top of that, a Docker image works much like a Git repository: you can commit changes to it and control its versions.
Suppose, for example, that upgrading a component breaks the entire environment. It is very easy to roll back to a previous version of your Docker image, and the whole process can be tested in a few minutes. Compared with backing up and imaging a VM, Docker is much faster and lets you iterate quickly and achieve redundancy. Launching a Docker image can also be as fast as starting a process on the machine.
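This rollback workflow can be sketched as follows (the image myapp and the container app are hypothetical names):

```shell
# Keep a known-good tag before upgrading (myapp is a hypothetical image).
docker tag myapp:latest myapp:known-good

# If the upgraded image breaks the environment, roll back in seconds:
docker stop app && docker rm app
docker run -d --name app myapp:known-good
```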
Isolation
Docker ensures that applications and their resources are isolated and separated. Gartner published a report a few months ago stating that Docker containers are as good as VM hypervisors at isolating resources, although there is still work to be done in terms of management and administration.
Consider a scenario where you are running multiple applications on a VM: team collaboration software (e.g., Redbooth), issue-tracking software (e.g., Jira), a centralized identity management system (e.g., Crowd), and so on. Since all of these applications run on different ports, you can expose them through Apache or Nginx acting as a reverse proxy.
So far, everything is in good shape, but as the environment evolves you will also need to add a content management system (e.g., Alfresco). Note that Alfresco requires a different version of Apache Tomcat, which causes a problem: you must either move your existing applications to that version of Tomcat or run Alfresco on the current one.
Fortunately, with Docker, you don't have to do this. Docker ensures that each container has its own resources that are isolated from other containers. You can have different containers for separate applications running entirely on completely different stacks.
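For instance, the Tomcat version conflict described above disappears when each application gets its own container (the container names and image tags below are illustrative):

```shell
# Two applications that need different Tomcat versions, side by side.
# Each container carries its own stack; only the host ports differ.
docker run -d --name legacy-app   -p 8081:8080 tomcat:7
docker run -d --name alfresco-app -p 8082:8080 tomcat:9
```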
Beyond that, cleanly removing an application from a server is quite difficult and can cause conflicts. Docker helps here too: because each application runs in its own container, removal is clean. If an application is no longer needed, you simply remove its container, leaving no temporary or configuration files behind on your host OS.
On top of these benefits, Docker also ensures that each application uses only the resources (CPU, memory, and disk space) assigned to it. A single application cannot drain all the available resources, which would otherwise lead to degraded performance or prolonged downtime for other applications running on the same host.
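These limits are set with flags on docker run; for example (the container name and the limit values are arbitrary):

```shell
# Cap the container at 512 MB of RAM and 1.5 CPU cores so it cannot
# starve other applications on the same host.
docker run -d --name limited-nginx --memory 512m --cpus 1.5 nginx
```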
Security
From a security point of view, Docker ensures that applications running in containers are completely separated and isolated from each other, giving you complete control over traffic flow and management. A Docker container cannot see the processes running inside another container. Architecturally, each container manages its own resources, from processing to networking.
To tighten security further, Docker mounts sensitive host OS locations (e.g., /proc and /sys) as read-only mount points and uses a copy-on-write filesystem so that one container cannot alter another's data.
This approach also limits the system calls a container can make to your host OS, and it works well with SELinux and AppArmor. In addition, Docker images on Docker Hub are digitally signed to ensure authenticity. Because Docker containers are isolated and their resources can be limited, even if one of your applications is compromised, the applications running in other containers are unaffected.
How to Install Docker
- Update package index and install dependencies.
sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
- Download GPG key for Docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
- Add the Docker repository.
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
- Update again and install docker-ce.
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
- To run docker without sudo, create a group named docker and add your user to it.
sudo groupadd docker
sudo usermod -aG docker $USER
Log out and log back in so the new group membership takes effect (or run newgrp docker in the current shell).
- Test-run docker and show the version.
docker version
docker run hello-world
To create a container we need an image available at hub.docker.com (Docker registry).
- Search for images, for example nginx image.
docker search nginx
- Download (pull) the nginx image.
docker pull nginx
- Displays all images available locally.
docker image ls
- Displays detailed nginx image information.
docker inspect nginx
docker image inspect nginx
- If you want to delete the image.
docker rmi nginx
After the Docker image is available, we can create a container.
- Create and run a container with the webserver name from the nginx image.
docker run -d -p 80:80 --name webserver nginx
- Displays running containers.
docker ps
- Test Nginx by accessing http://127.0.0.1 or http://localhost in a browser.
- Enter the container (bash).
docker exec -t -i webserver /bin/bash
- Stop the webserver container.
docker stop webserver
- Re-run the webserver container.
docker start webserver
- Displays the detailed information of the webserver container.
docker inspect webserver
docker container inspect webserver
- Delete the webserver container; the container must be stopped first.
docker stop webserver
docker rm webserver
- Displays all docker commands.
docker --help
- Displays help for a command group, for example, help for images.
docker image --help
Docker can optimize your company's IT infrastructure and also improve productivity for programmers and system administrators. With a container system, application development, deployment, monitoring, and infrastructure management can all be faster, more efficient, and more secure.
In conjunction with cloud computing, the benefits mentioned above show how Docker is an effective open-source platform.
More and more companies in Indonesia are adopting Docker, especially information technology service providers and large companies that run their own data centers. For government agencies, too, the advantages of Docker can help save on the state budget.
A mount point is a directory in a filesystem to which additional storage, located outside the operating system's root partition, is logically attached.
An image is a collection of files and folders that duplicates the original files and folders, including the file and folder structure of the operating system. Images often also contain files added by a system administrator or configuration manager.