Docker tutorial: Get started with Docker

Containers provide a lightweight way to make application workloads portable, like virtual machines but without the overhead and bulk typically associated with VMs. With containers, you can package apps and services and move them freely between physical, virtual, or cloud environments.

Docker, the container creation and management system created by Docker Inc., takes the native container functionality found in Linux and makes it available to end users through a command-line interface and a set of APIs.

Many common application components are now available as prepackaged Docker containers, making it easy to deploy software stacks as decoupled components (the microservices model). That said, it helps to know how the pieces fit together from the inside out.

Thus, in this guide, I install the Apache web server in a Docker container and investigate how Docker works along the way.

Install Docker

I'm using Ubuntu as the foundation of the Docker build. Not only is Ubuntu a popular and widely used distribution, but the Docker team itself uses Ubuntu for development, and Docker is supported on Ubuntu Server from version 12.04 onward. For the sake of simplicity, I start with instructions for a fresh installation of Ubuntu 16.04.

Prepare Ubuntu Linux for Docker

The first thing to do is obtain the proper version of the kernel and its headers:

$ sudo apt-get install --install-recommends linux-generic-hwe-16.04

This process may take some time, and you'll need to reboot when it's done:

$ sudo reboot

Afterward, you may also need to upgrade other packages on the system:

$ sudo apt-get update

$ sudo apt-get upgrade

Install Docker on Ubuntu

Installing Docker on the CentOS, Fedora, Debian, Ubuntu, and Raspbian Linux distributions is easily done by way of a shell script that you can download from https://get.docker.com/. For that you'll need the curl command. To get the newest version of curl:

sudo apt-get install curl

Once curl is installed, fetch the installation script and set it running:

curl -s https://get.docker.com | sudo sh

When the script finishes the installation, you'll see a note like the following, with installation details for both the client and server components:

Details on how to add non-root users to Docker appear near the bottom. It's convenient to do this, but if you do, it's recommended to create a non-root user specifically for working with Docker and for no other function. For the sake of this tutorial, though, I'm sticking with using sudo to run Docker by way of unprivileged users.

Now you can test a basic Docker container:

$ sudo docker run -i -t ubuntu /bin/bash

This command downloads the generic Docker Ubuntu image, per the ubuntu parameter, and runs the /bin/bash command in that container. The -i and -t options open standard input and a pseudo TTY, respectively.

If successful, the hostname in the command prompt changes to something like root@<container-id>:/#, indicating the ID number (and hostname) of the new running container. To leave, type exit, the same as you would to leave a shell session.

You should now have a working Docker installation on your server. You can test it and get basic information using the docker info command:

$ sudo docker info

The output of the docker info command shows the number of containers and images, among other pertinent information. Note that it may be quite lengthy; this example shows only the last of two pages.

One last change you will need to make if you’re running Ubuntu’s UFW firewall is to allow for packet forwarding. You can check whether UFW is running by entering the following:

$ sudo ufw status

If the command returns a status of inactive, you can skip this next step. Otherwise you will need to edit the UFW configuration file /etc/default/ufw and change the policy for forwarding from DROP to ACCEPT. To do this using the Nano editor, enter the following:

$ sudo nano /etc/default/ufw

And change this line:

DEFAULT_FORWARD_POLICY="DROP"

To this:

DEFAULT_FORWARD_POLICY="ACCEPT"

Save the file, then run:

$ sudo ufw reload
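If you prefer to make this edit non-interactively, a sed one-liner can do the same job as the Nano session above. The sketch below operates on a copy of the file in /tmp so it can be tried safely; on a real host you would point it at /etc/default/ufw directly:

```shell
# Work on a copy of the UFW defaults file; fall back to a stub line
# containing the default policy if UFW isn't installed on this machine
cp /etc/default/ufw /tmp/ufw 2>/dev/null || printf 'DEFAULT_FORWARD_POLICY="DROP"\n' > /tmp/ufw

# Flip the forwarding policy from DROP to ACCEPT in place
sed -i 's/^DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /tmp/ufw

# Show the resulting line
grep DEFAULT_FORWARD_POLICY /tmp/ufw
```

Remember to run sudo ufw reload afterward, just as with the manual edit.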

Work with Docker images and Docker containers

Docker containers are much more efficient than virtual machines. When a container is not running a process, it is completely dormant. You might think of Docker containers as self-contained processes—when they’re not actively running, they consume no resources apart from storage.

You can view active and inactive containers using the docker ps command:

# This command will show ALL containers on the system

$ sudo docker ps -a

# This will show only RUNNING containers

$ sudo docker ps       

You can view all available commands by simply entering docker. For an up-to-date rundown of all commands, their options, and full descriptions, consult the official command-line client documentation.

When I ran docker run earlier, that command automatically pulled an Ubuntu container image from the Docker Hub registry service. Most of the time, though, you’ll want to pull container images into the local cache ahead of time, rather than do that on demand. To do so, use docker pull, like this:

$ sudo docker pull ubuntu

A full, searchable list of images and repositories is available on the Docker Hub.

Docker images vs. containers

Something worth spelling out at this point is how images, containers, and the pull/push process all work together.

Docker containers are built from images, which are essentially shells of operating systems that contain the necessary binaries and libraries to run applications in a container.

Images are labeled with tags, essentially metadata, that make it easy to store and pull different versions of an image. Naturally, a single image can be associated with multiple tags: ubuntu:16.04, ubuntu:xenial-20171201, ubuntu:xenial, ubuntu:latest.

When I typed docker pull ubuntu earlier, I pulled the default Ubuntu image from the Ubuntu repository, which is the image tagged latest. In other words, the command docker pull ubuntu is equivalent to docker pull ubuntu:latest and (at the time of this writing) docker pull ubuntu:xenial.

Note that if I had typed: 

$ sudo docker pull -a ubuntu

I would have pulled all images (the -a flag) in the Ubuntu repository into my local system. Most of the time, though, you will want either the default image or a specific version. For example, if you want the image for Ubuntu Saucy Salamander, you'd use docker pull ubuntu:saucy to fetch the image with that particular tag from that repo.

The same logic behind repos and tags applies to other manipulations of images. If you pulled saucy as per the above example, you would run it by typing sudo docker run -i -t ubuntu:saucy /bin/bash. If you type sudo docker image rm ubuntu to remove the ubuntu image, it will remove only the image tagged latest. To remove images other than the default, such as Ubuntu Saucy, you must include the appropriate tag:

sudo docker image rm ubuntu:saucy

Docker image and container workflow

Back to working with images. Once you’ve pulled an image, whatever it may be, you create a live container from it (as I’ve shown) by executing the docker run command. After you have added software and changed any settings inside the container, you can create a new image from those changes by using the docker commit command.

It’s important to note that Docker only stores the deltas, or changes, in images built from other images. As you build your own images, only the changes you make to the base image are stored in the new image, which links back to the base image for all its dependencies. Thus you can create images that have a virtual size of 266MB, but take up only a few megabytes on disk, due to this efficiency.

Fully configured containers can then be pushed up to a central repository to be used elsewhere in the organization or even shared publicly. In this way, an application developer can publish a public container for an app, or you can create private repositories to store all the containers used internally by your organization.

Create a new Docker image from a container

Now that you have a better understanding of how images and containers work, let's set up an Apache web server container and make it permanent.

Start with a new Docker container

First, you need to build a new container. There are a few ways to do this, but because you have a few commands to run, start a root shell in a new container:

$ sudo docker run -i -t --name apache_web ubuntu /bin/bash

This creates a new container with a unique ID and the name apache_web. It also gives you a root shell because you specified /bin/bash as the command to run. Now install the Apache web server using apt-get:

root@<container-id>:/# apt-get update

root@<container-id>:/# apt-get install apache2

Note that you don’t need to use sudo, because you’re running as root inside the container. Note that you do need to run apt-get update, because, again, the package list inside the container is not the same as the one outside of it.

The normal apt-get output appears, and the Apache2 package is installed in your new container. Once the install has completed, start Apache, install curl, and test the installation, all from within your container:

root@<container-id>:/# service apache2 start

root@<container-id>:/# apt-get install curl

root@<container-id>:/# curl http://localhost

Following the last command, you should see the raw HTML of the default Apache page displayed in the console. That means the Apache server is installed and running in your container.

If you were doing this in a production environment, you’d next configure Apache to your requirements and install an application for it to serve. Docker lets directories outside a container be mapped to paths inside it, so one approach is to store your web app in a directory on the host and make it visible to the container through a mapping.
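As a sketch of that mapping approach (the /tmp/webapp path and its index.html are hypothetical, not from this tutorial): create the content on the host, then pass -v host_dir:container_dir to docker run so the directory appears inside the container. The docker run line is shown only as a comment because it requires a running Docker daemon:

```shell
# Hypothetical host directory holding the web content to serve
mkdir -p /tmp/webapp
echo '<h1>Hello from the host</h1>' > /tmp/webapp/index.html

# On a Docker host, -v maps the host directory over Apache's default
# document root inside the container (illustrative, not executed here):
#   sudo docker run -i -t -v /tmp/webapp:/var/www/html ubuntu /bin/bash
```

Changes made to the files on the host are then immediately visible inside the running container.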

Create a startup script for a Docker container

Remember that a Docker container runs only as long as its process or processes are active. So if the process you launch when you first run a container moves into the background, like a system daemon, Docker will stop the container. Therefore, you need to run Apache in the foreground when the container launches, so that the container doesn’t exit as soon as it fires up.

Create a script, startapache.sh, in /usr/local/sbin: 

# You might need to first install Nano inside the container

root@<container-id>:/# apt-get install nano

root@<container-id>:/# nano /usr/local/sbin/startapache.sh

In the startapache.sh file, add these lines:

#!/bin/bash

. /etc/apache2/envvars

/usr/sbin/apache2 -D FOREGROUND

Write the changes and save the file. Then make it executable:

root@<container-id>:/# chmod +x /usr/local/sbin/startapache.sh

All this small script does is bring in the appropriate environment variables for Apache and start the Apache process in the foreground.
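If you'd rather not use an interactive editor, the same script can be written with a heredoc. This sketch also uses exec so that Apache replaces the shell as the script's running process, a common refinement that is not part of the original script above; it writes to /tmp here so it can be tried outside a container:

```shell
# Write the startup script non-interactively (to /tmp for demonstration;
# inside the container the path would be /usr/local/sbin/startapache.sh)
cat > /tmp/startapache.sh <<'EOF'
#!/bin/bash
# Bring in Apache's environment variables, then start Apache in the
# foreground; exec replaces the shell so signals reach apache2 directly
. /etc/apache2/envvars
exec /usr/sbin/apache2 -D FOREGROUND
EOF

# Make it executable, as with the chmod step above
chmod +x /tmp/startapache.sh
```

Either version keeps a foreground process alive, which is what prevents the container from exiting immediately.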

You’re done modifying the contents of the container, so you can leave the container by typing exit. When you exit the container, the container will stop.

Commit the container to create a new Docker image

Now you need to commit the container to save the changes you’ve made:

$ sudo docker commit apache_web local:apache_web

The commit will save your container as a new image and return a unique ID. The argument local:apache_web will cause the commit to be placed in a local repository named local with a tag of apache_web.

You can see this by running the command sudo docker images:

REPOSITORY  TAG         IMAGE ID      CREATED      VIRTUAL SIZE

local       apache_web  d95238078ab0  4 minutes ago  284.1 MB

Note that the exact details of your image—the image ID, the size of the container—will be different from my example.

Docker containers are designed to be immutable. Whenever you commit changes to a container, the results are written out to an entirely new container, never to the original. If you want to swap out Apache with, say, Nginx, you would start with the original ubuntu:latest container, add Nginx to that, and save out the results as an all-new container named something like local:nginx.
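The manual run/install/commit workflow above can also be captured declaratively in a Dockerfile, which docker build turns into an image in one step. This is a minimal sketch that approximates what was done by hand; it assumes a startapache.sh script like the one shown earlier sits next to the Dockerfile, and the tag name is just an example:

```dockerfile
# Sketch: approximate the manual apache_web build as a Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2 curl
COPY startapache.sh /usr/local/sbin/startapache.sh
RUN chmod +x /usr/local/sbin/startapache.sh
EXPOSE 80
CMD ["/usr/local/sbin/startapache.sh"]
```

From the directory containing both files, you would build it with something like sudo docker build -t local:apache_web . — each instruction becomes one of the stored image layers described above.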

Understand Docker networking basics

Now that you have your image, you can start your container and begin serving pages. Before you do, however, let me take a moment to explain how Docker handles networking.

When Docker is installed, it creates three virtual networks that can be used by Docker containers:

  • bridge: This is the network that containers connect to by default. The bridge network allows containers to talk to each other directly, but not to the host system.
  • host: This network lets containers be seen by the host directly, as if any apps within them were running as local network services.
  • none: This is essentially a null or loopback network. A container connected to none can’t see anything but itself.

When you want to launch a container and have it communicate with both other containers and the outside world, you need to manually map ports from that container to the host. For the sake of my example, you can do this on the command line when you launch your newly created container:

$ sudo docker run -d -p 8080:80 --name apache local:apache_web /usr/local/sbin/startapache.sh