xfeng


Docker Summary


1. Overview#

This article introduces relevant knowledge about Docker, focusing on the three core components of Docker: images, containers, and repositories.


2. Introduction to Docker#

  • Docker is the world's leading software container platform.
  • Docker is written in Go (the language introduced by Google) and builds on Linux kernel technologies such as cgroups, namespaces, and union filesystems like AUFS to encapsulate and isolate processes; it is a form of operating-system-level virtualization. Because the isolated processes are independent of the host and of each other, they are called containers. Docker's initial implementation was based on LXC.
  • Docker can automatically execute repetitive tasks, such as setting up and configuring development environments, freeing developers to focus on what truly matters: building outstanding software.
  • Users can easily create and use containers to place their applications. Containers can also be versioned, copied, shared, and modified, just like managing regular code.


Docker is designed to make full use of Union FS technology, organizing images into a layered storage architecture.
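The core idea behind a union filesystem can be sketched without Docker at all: several directories ("layers") are stacked, lookups go top-down, and the first layer containing a file wins. The script below is a toy model for illustration only, not how Docker's storage drivers are actually implemented.

```shell
#!/bin/sh
# Toy model of union-style lookup: layers are plain directories,
# searched top-most first; the first layer containing the file wins.
lookup() {
  file="$1"; shift
  for layer in "$@"; do
    if [ -e "$layer/$file" ]; then
      cat "$layer/$file"
      return 0
    fi
  done
  echo "not found"
}

root=$(mktemp -d)
mkdir -p "$root/base" "$root/top"
echo "from base" > "$root/base/a.txt"
echo "from base" > "$root/base/b.txt"
echo "from top"  > "$root/top/b.txt"   # the upper layer shadows the lower one

lookup a.txt "$root/top" "$root/base"  # a.txt only exists in the base layer
lookup b.txt "$root/top" "$root/base"  # b.txt resolves to the top layer
```

This is why a container can "modify" files from a read-only image layer: the change lands in an upper, writable layer that shadows the original.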

3. Docker Philosophy#


  • Containerization: Place all necessary content into different containers, and anyone who needs certain content (environment) can directly take the corresponding container.
  • Standardization:
    • Standardization of transportation: Docker has a dock where all uploaded containers are placed. When someone needs a specific environment, they can directly send [Little Blue Whale]^(Docker's mascot) to transport that container.
    • Standardization of commands: Docker provides a series of commands to help us perform operations related to containers.
    • Provides REST APIs: Leading to many graphical interfaces, such as: [Rancher]^(an open-source enterprise-level container management platform).
  • Isolation: When running the contents of a container, Docker allocates a separate space in the Linux kernel, which does not affect other programs.
  • Central repository/registry: A super dock where all containers are placed.
  • Image: A read-only template from which containers are created (the "shipping container" of the metaphor).
  • Container: A running instance of an image. Containers package software into standardized units for development, delivery, and deployment. In simple terms, a container is a place to store things, just as a backpack holds stationery, a wardrobe holds clothes, and a shoe rack holds shoes.

4. Container VS Virtual Machine#


A container is an application layer abstraction used to package code and dependent resources together. Multiple containers can run on the same machine, sharing the operating system kernel, but each runs as an independent process in user space. Compared to virtual machines, containers take up less space (container images are typically only a few dozen megabytes) and can start almost instantly.

A virtual machine (VM) is a physical hardware layer abstraction used to turn one server into multiple servers. A hypervisor allows multiple VMs to run on one machine. Each VM contains a complete operating system, one or more applications, necessary binaries, and library resources, thus occupying a large amount of space. Additionally, VMs start up very slowly.

5. Installing Docker#

  • Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
  • Specify Docker image source
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  • Install Docker
yum makecache fast
yum -y install docker-ce
  • Start Docker
systemctl start docker
systemctl enable docker
docker run hello-world

6. Docker Central Repository#


Docker's official central repository (it has the most complete collection of images, but download speeds can be relatively slow):

https://hub.docker.com/

Domestic mirror sites:

https://c.163yun.com/hub#/home

http://hub.daocloud.io/

Once an image is built, it can run on the current host machine. However, if you need to use this image on other servers, a centralized storage and distribution service for images is required, and Docker Registry is such a service.

A Docker Registry can contain multiple repositories, each repository can contain multiple tags; each tag corresponds to an image.

Typically, a repository will contain images of different versions of the same software, and tags are commonly used to correspond to various versions of that software.

Concepts of public Docker Registry services and private Docker Registry:

  1. Public Docker Registry services are open for user use and allow users to manage images.

    Generally, such public services allow users to upload and download public images for free and may provide paid services for managing private images.

    The most commonly used public Registry service is the official Docker Hub, which is also the default Registry and has a large number of high-quality official images.

  2. In addition to using public services, users can also set up a private Docker Registry locally. Docker provides a Docker Registry image that can be used directly as a private Registry service.

    The open-source Docker Registry image only provides the server-side implementation of the Docker Registry API, sufficient to support Docker commands without affecting usage. However, it does not include a graphical interface or advanced features like image maintenance, user management, and access control.

7. Images#


Image:
A special file system (actually composed of multiple layered file systems).

The operating system is divided into kernel and user space. For Linux, after the kernel starts, it mounts the root file system to provide support for user space, while Docker images are equivalent to a root file system.

In addition to providing the programs, libraries, resources, and configurations needed for container runtime, Docker images also contain some configuration parameters prepared for runtime, such as: anonymous volumes, environment variables, and users.

Images do not contain any dynamic data, and their content does not change after being built.

When building an image, it is built layer by layer, with the previous layer serving as the foundation for the next layer. Once a layer is built, it does not change anymore; any changes to the next layer only occur within that layer.

  • Pull an image (pull an image from the central repository to local)
docker pull image_name[:tag]
  • View all local images (view information about images that have been installed locally, including ID, name, version, update time, size)
docker images
  • Delete a local image (images occupy disk space and can be directly deleted)
docker rmi image_id
  • Import and export images
# Export a local image
docker save -o export_path image_id
# Load a local image file
docker load -i image_file
# Modify image name
docker tag image_id new_image_name:version
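The `[:tag]` part of `docker pull image_name[:tag]` is optional; when it is omitted, Docker assumes the tag `latest`. A docker-free shell sketch of that defaulting rule (simplified: real image references may also carry a registry host and a digest, which this ignores):

```shell
#!/bin/sh
# Split an image reference "name[:tag]" into its name and tag,
# defaulting the tag to "latest" when none is given.
parse_image_ref() {
  ref="$1"
  case "$ref" in
    *:*) name="${ref%%:*}"; tag="${ref#*:}" ;;
    *)   name="$ref";       tag="latest"    ;;
  esac
  echo "$name $tag"
}

parse_image_ref "redis:6.2"  # explicit tag
parse_image_ref "nginx"      # no tag: falls back to latest
```

So `docker pull nginx` and `docker pull nginx:latest` request the same image.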

8. Containers#


Container:

A running image.

The relationship between an image (Image) and a container (Container) is similar to class and instance in object-oriented programming.

An image is a static definition, while a container is the entity of the image at runtime. Containers can be created, started, stopped, deleted, paused, etc.

The essence of a container is a process, but unlike processes executed directly on the host, container processes run in their own independent namespace. As mentioned earlier, images use layered storage, and containers do as well.

The lifecycle of a container's storage layer is the same as that of the container; when the container dies, the container's storage layer also disappears. Therefore, any information saved in the container's storage layer will be lost when the container is deleted.

According to Docker's best practices, containers should not write any data to their storage layers; the container storage layer should remain stateless.

All file write operations should use data volumes or bind host directories, where read and write operations will bypass the container storage layer and directly read and write to the host (or network storage), providing higher performance and stability.

The lifecycle of a data volume is independent of the container; when the container dies, the data volume does not. Therefore, after using data volumes, containers can be deleted or re-run at will without losing data.

  • Run a container (running a container requires specifying a specific image; if the image does not exist, it will be downloaded directly)
# Simple operation
docker run image_id|image_name[:tag]
# Common parameters
docker run -d -p host_port:container_port --name container_name image_id|image_name[:tag]
# -d: run the container in the background (detached)
# -p host_port:container_port: map a port on the host to a port inside the container
# --name container_name: assign a name to the container
  • View running containers
docker ps [-qa]
# -a: view all containers, including those not running
# -q: only view container IDs
  • View container logs (view the logs of the container to see runtime information)
docker logs -f container_id
# -f: follow the log output, streaming new lines as they are written
  • Enter the container (you can enter the container to perform operations)
docker exec -it container_id bash
  • Copy content to the container (copy files from the host to a specified directory inside the container)
docker cp file_name container_id:container_internal_path
  • Restart/start/stop/delete a container (operations for starting, stopping, and deleting containers are frequently used)
# Restart the container
docker restart container_id
# Start a stopped container
docker start container_id
# Stop a specified container (before deleting a container, it needs to be stopped first)
docker stop container_id
# Stop all containers
docker stop $(docker ps -qa)
# Delete a specified container
docker rm container_id
# Delete all containers
docker rm $(docker ps -qa)
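The `$(docker ps -qa)` in the stop-all and delete-all commands is ordinary shell command substitution: the inner command's output (one container ID per line) becomes the outer command's argument list. A docker-free sketch with hypothetical stand-in functions (`fake_ps_qa` and `fake_stop` are made up for illustration, not Docker commands):

```shell
#!/bin/sh
# Stand-in for `docker ps -qa`: print a few fake container IDs.
fake_ps_qa() { printf '%s\n' a1b2c3 d4e5f6; }

# Stand-in for `docker stop`: report each ID it receives as an argument.
fake_stop() {
  for id in "$@"; do
    echo "stopping $id"
  done
}

# The substitution expands to: fake_stop a1b2c3 d4e5f6
fake_stop $(fake_ps_qa)
```

The real `docker stop $(docker ps -qa)` works the same way: every listed ID is passed to `docker stop` as a separate argument.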

9. Docker Applications#

  • Install Tomcat with Docker
docker run -d -p 8080:8080 --name tomcat daocloud.io/library/tomcat:8.5.15-jre8
  • Install MySQL with Docker
docker run -d -p 3306:3306 --name mysql -e MYSQL_ROOT_PASSWORD=root daocloud.io/library/mysql:5.7.4

10. Data Volumes#


Considerations:

Data generated in Docker containers will be destroyed when the container is deleted.

Can Docker containers directly exchange files with external machines?

How can containers interact with each other?

Data volumes:

A data volume is a directory or file on the host machine.

When a container directory is bound to a data volume directory, modifications on either side will be immediately synchronized.

A data volume can be mounted by multiple containers simultaneously.

A container can also mount multiple data volumes.

  • Create a data volume (after creating a data volume, it will be stored in a directory /var/lib/docker/volumes/data_volume_name/_data by default)
docker volume create data_volume_name
  • View data volume details (view detailed information about the data volume, including storage path, creation time, etc.)
docker volume inspect data_volume_name
  • View all data volumes
docker volume ls
  • Delete a data volume
docker volume rm data_volume_name
  • Container mapping data volumes

There are two mapping methods:

  1. Map by data volume name: if the volume does not exist, Docker creates it automatically and seeds it with the files the container already has at the mount point, stored under the default volume path.
  2. Map by host path: if the directory does not exist, Docker creates it, but it starts out empty, so it masks whatever files the container originally had at that location.
# Map by data volume name
docker run -v data_volume_name:container_internal_path image_id
# Map the data volume by path
docker run -v path:container_internal_path image_id
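These same two mapping styles also appear in docker-compose files (covered in section 12). A hypothetical fragment contrasting them, where the service and volume names are made up for illustration:

```yaml
services:
  app:
    image: nginx
    volumes:
      - app_data:/var/lib/app   # by volume name: Docker manages the host path
      - ./conf:/etc/app/conf    # by path: an explicit host directory (bind mount)
volumes:
  app_data:                     # named volumes must be declared at the top level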

11. Custom Images with Dockerfile#

We can download an image from the central repository or manually create an image by specifying custom image information through a Dockerfile.
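As a minimal sketch of what such a Dockerfile can look like (the base image and app.jar here are hypothetical, chosen only to illustrate the common instructions; every instruction that changes the filesystem produces one immutable image layer):

```dockerfile
# Base image: its layers are pulled from the registry (hypothetical choice)
FROM openjdk:8-jre
# Filesystem-changing instructions each add one layer
WORKDIR /app
COPY app.jar .
# EXPOSE and CMD only record metadata; they add no filesystem layer
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]
```

Such a file is typically built with `docker build -t myapp:1.0 .` and run with `docker run -d -p 8080:8080 myapp:1.0`.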


12. Docker-Compose#

Previously, running an image required passing many parameters on the command line; with Docker-Compose, these parameters can be written down in a file instead.

Docker-Compose can also help us manage containers in bulk.

This information only needs to be maintained through a docker-compose.yml file.

12.1 Download and Install Docker-Compose#

  • Download Docker-Compose

  • Set permissions (rename the downloaded Docker-Compose binary and make it executable)

mv docker-compose-linux-x86_64 docker-compose
chmod 777 docker-compose
  • Configure environment variables (to facilitate future operations, configure an environment variable)
mv docker-compose /usr/local/bin

vim /etc/profile
# Add the line: export PATH=$JAVA_HOME:/usr/local/bin:$PATH

source /etc/profile
  • Test (enter any directory and type the docker-compose command)

12.2 Docker-Compose Managing MySQL and Tomcat Containers#

The yml file specifies configuration information in key format.

Multiple configuration items are distinguished by line breaks and indentation.

Do not use tabs in the docker-compose.yml file.

version: '3.1'
services:
  mysql:  # Service name
    restart: always  # Restart this container whenever the Docker daemon starts
    image: daocloud.io/library/mysql:5.7.4  # Specify the image path
    container_name: mysql  # Specify the container name
    ports:
      - 3306:3306  # Specify port mapping (host:container)
    environment:
      MYSQL_ROOT_PASSWORD: root  # Specify the MySQL root user password
      TZ: Asia/Shanghai  # Specify the timezone
    volumes:
      - /opt/docker_mysql_tomcat/mysql_data:/var/lib/mysql  # Map data volume
  tomcat:
    restart: always
    image: daocloud.io/library/tomcat:8.5.15-jre8
    container_name: tomcat
    ports:
      - 8080:8080
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /opt/docker_mysql_tomcat/tomcat_webapps:/usr/local/tomcat/webapps
      - /opt/docker_mysql_tomcat/tomcat_logs:/usr/local/tomcat/logs

12.3 Using docker-compose Commands to Manage Containers#

When using docker-compose commands, it will look for the docker-compose.yml file in the current directory by default.

# 1. Start managed containers based on docker-compose.yml
docker-compose up -d

# 2. Stop and remove containers
docker-compose down

# 3. Start|Stop|Restart existing containers managed by docker-compose
docker-compose start|stop|restart

# 4. View containers managed by docker-compose
docker-compose ps

# 5. View logs
docker-compose logs -f

12.4 Using docker-compose with Dockerfile#

Use docker-compose.yml together with a Dockerfile so that starting the services also builds the custom image, and let docker-compose manage the resulting containers.

  • Docker-compose file (write the docker-compose.yml file)
# yml file
version: '3.1'
services:
  ssm:
    restart: always
    build:  # Build a custom image
      context: ../  # Specify the directory containing the Dockerfile
      dockerfile: Dockerfile  # Specify the Dockerfile name
    image: ssm:1.0.1
    container_name: ssm
    ports:
      - 8081:8080
    environment:
      TZ: Asia/Shanghai
  • Dockerfile (write the Dockerfile)
FROM daocloud.io/library/tomcat:8.5.15-jre8
COPY ssm.war /usr/local/tomcat/webapps
  • Run (test the effect)
# You can directly start the custom image built based on the docker-compose.yml and Dockerfile files
docker-compose up -d
# If the custom image does not exist, it will help us build the custom image; if the custom image already exists, it will run this custom image directly.

# Rebuild the custom image
docker-compose build
# Run the current content and rebuild
docker-compose up -d --build

13. Conclusion#

This article mainly elaborates on some common concepts in Docker in detail.
