Mar 20, 2017. Current Docker for Mac (as of 17.03.0-ce-mac1) uses qcow2 as its disk image format, but qcow2 has worse performance than the raw format. A short script can convert the image from qcow2 to raw.
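A minimal sketch of such a conversion with qemu-img is shown below. It assumes qemu is installed (for example via Homebrew) and that the image sits at the default Docker for Mac 17.03 path, which may differ on your machine; stop Docker for Mac and back up the original file before trying it.

```
#!/bin/bash
# Hedged sketch: convert the Docker for Mac disk image from qcow2 to raw.
# The path below is the 17.03-era default and is an assumption; adjust as needed.
set -e
DATA_DIR="$HOME/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux"
qemu-img convert -f qcow2 -O raw "$DATA_DIR/Docker.qcow2" "$DATA_DIR/Docker.raw"
# Whether your Docker for Mac build picks up Docker.raw depends on its version.
```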
Docker uses containers to create virtual environments that isolate a TensorFlow installation from the rest of the system. TensorFlow programs are run within this virtual environment that can share resources with its host machine (access directories, use the GPU, connect to the Internet, etc.). The TensorFlow Docker images are tested for each release.
Docker is the easiest way to enable TensorFlow GPU support on Linux since only the NVIDIA® GPU driver is required on the host machine (the NVIDIA® CUDA® Toolkit does not need to be installed).
TensorFlow Docker requirements
- Install Docker on your local host machine.
- For GPU support on Linux, install NVIDIA Docker support.
- Take note of your Docker version with docker -v. Versions earlier than 19.03 require nvidia-docker2 and the --runtime=nvidia flag. On versions including and after 19.03, you will use the nvidia-container-toolkit package and the --gpus all flag. Both options are documented on the page linked above; a short sketch of both invocation styles follows this list.
- To run the docker command without sudo, create the docker group and add your user. For details, see the post-installation steps for Linux.
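As referenced in the list above, here is a quick sketch of the version check and the two invocation styles. The CUDA image tag is only an example and may need to be swapped for one currently published on Docker Hub.

```
docker -v   # note the version number

# Docker 19.03 and later (nvidia-container-toolkit):
docker run --gpus all --rm nvidia/cuda:11.0-base nvidia-smi

# Docker earlier than 19.03 (nvidia-docker2):
docker run --runtime=nvidia --rm nvidia/cuda:11.0-base nvidia-smi
```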
Download a TensorFlow Docker image
The official TensorFlow Docker images are located in the tensorflow/tensorflow Docker Hub repository. Image releases are tagged using the following format:
Tag | Description |
---|---|
latest | The latest release of TensorFlow CPU binary image. Default. |
nightly | Nightly builds of the TensorFlow image. (Unstable.) |
version | Specify the version of the TensorFlow binary image, for example: 2.1.0 |
devel | Nightly builds of a TensorFlow master development environment. Includes TensorFlow source code. |
custom-op | Special experimental image for developing TF custom ops. More info here. |
Each base tag has variants that add or change functionality:
Tag Variants | Description |
---|---|
tag-gpu | The specified tag release with GPU support. (See below) |
tag-jupyter | The specified tag release with Jupyter (includes TensorFlow tutorial notebooks) |
You can use multiple variants at once. For example, the following downloads TensorFlow release images to your machine:
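For example, something like the following pulls several variants (tag names follow the tables above):

```
docker pull tensorflow/tensorflow                     # latest stable release
docker pull tensorflow/tensorflow:devel-gpu           # nightly dev release with GPU support
docker pull tensorflow/tensorflow:latest-gpu-jupyter  # latest release with GPU support and Jupyter
```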
Start a TensorFlow Docker container
To start a TensorFlow-configured container, use the following command form:
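The general form looks roughly like this (items in brackets are optional placeholders):

```
docker run [-it] [--rm] [-p hostPort:containerPort] tensorflow/tensorflow[:tag] [command]
```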
For details, see the docker run reference.
Examples using CPU-only images
Let's verify the TensorFlow installation using the latest tagged image. Docker downloads a new TensorFlow image the first time it is run:
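For example, a one-liner along these lines runs a small computation inside the container:

```
docker run -it --rm tensorflow/tensorflow \
   python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
```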
Let's demonstrate some more TensorFlow Docker recipes. Start a bash shell session within a TensorFlow-configured container:
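For instance:

```
docker run -it tensorflow/tensorflow bash
```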
Within the container, you can start a python session and import TensorFlow.
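Inside that shell, something like the following confirms that the import works:

```
python -c "import tensorflow as tf; print(tf.__version__)"
```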
To run a TensorFlow program developed on the host machine within a container, mount the host directory and change the container's working directory (-v hostDir:containerDir -w workDir):
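A sketch of that invocation; script.py is a hypothetical file in the current host directory:

```
docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py
```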
Permission issues can arise when files created within a container are exposed to the host. It's usually best to edit files on the host system.
Start a Jupyter Notebook server using TensorFlow's nightly build:
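For example:

```
docker run -it -p 8888:8888 tensorflow/tensorflow:nightly-jupyter
```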
Follow the instructions and open the URL in your host web browser: http://127.0.0.1:8888/?token=...
GPU support
Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA® driver (the NVIDIA® CUDA® Toolkit is not required).
Install the Nvidia Container Toolkit to add NVIDIA® GPU support to Docker. nvidia-container-runtime is only available for Linux. See the nvidia-container-runtime platform support FAQ for details.
Check if a GPU is available:
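On Linux, for example:

```
lspci | grep -i nvidia
```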
Verify your nvidia-docker installation:
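For example, by running nvidia-smi through Docker; the CUDA image tag is only an example and may need to be adjusted:

```
docker run --gpus all --rm nvidia/cuda:11.0-base nvidia-smi
```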
nvidia-docker v2 uses --runtime=nvidia instead of --gpus all. nvidia-docker v1 uses the nvidia-docker alias, rather than the --runtime=nvidia or --gpus all command line flags.
Examples using GPU-enabled images
Download and run a GPU-enabled TensorFlow image (may take a few minutes):
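For example:

```
docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \
   python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
```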
It can take a while to set up the GPU-enabled image. If repeatedly running GPU-based scripts, you can use docker exec to reuse a container.
Use the latest TensorFlow GPU image to start a bash shell session in the container:
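For instance:

```
docker run --gpus all -it tensorflow/tensorflow:latest-gpu bash
```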
Docker has been widely adopted and is used to run and scale applications in production. Additionally, it can be used to start applications quickly by executing a single Docker command.
Companies also are investing more and more effort into improving development in local and remote Docker containers, which comes with a lot of advantages as well.
You can get the basic information about your Docker configuration by executing:
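For example:

```
docker info
# or narrow the output down to the two fields discussed below:
docker info | grep -E "Storage Driver|Docker Root Dir"
```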
The output contains information about your storage driver and your docker root directory.
The storage location of Docker images and containers
A Docker container consists of network settings, volumes, and images. The location of Docker files depends on your operating system. Here is an overview for the most used operating systems:
- Ubuntu: /var/lib/docker/
- Fedora: /var/lib/docker/
- Debian: /var/lib/docker/
- Windows: C:\ProgramData\DockerDesktop
- macOS: ~/Library/Containers/com.docker.docker/Data/vms/0/
In macOS and Windows, Docker runs Linux containers in a virtual environment. Therefore, there are some additional things to know.
Docker for Mac
Docker is not natively compatible with macOS, so Hyperkit is used to run a virtual image. Its virtual image data is located in:
~/Library/Containers/com.docker.docker/Data/vms/0
Within the virtual image, the path is the default Docker path /var/lib/docker.
You can investigate your Docker root directory by creating a shell in the virtual environment:
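One way to do this on older Docker Desktop for Mac releases is to attach to the VM's tty with screen; the path is the same one listed above and may differ in newer versions:

```
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
```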
You can kill this session by pressing Ctrl+a, followed by pressing k and y.
Docker for Windows
On Windows, Docker is a bit fractioned. There are native Windows containers that work similarly to Linux containers. Linux containers are run in a minimal Hyper-V based virtual environment.
The configuration and the virtual image used to execute Linux containers are saved in the default Docker root folder.
C:\ProgramData\DockerDesktop
If you inspect regular images, you will get Linux paths (for example, under /var/lib/docker/overlay2) that only exist inside the virtual machine.
You can connect to the virtual image by:
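One generic approach (an assumption, not the only way) is to start a privileged container that joins the namespaces of the VM's PID 1; the debian image is just a convenient choice because it ships nsenter:

```
docker run -it --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
```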
There, you can go to the referenced location:
The internal structure of the Docker root folder
Inside /var/lib/docker, different kinds of information are stored: for example, data for containers, volumes, builds, networks, and clusters.
Docker images
The heaviest contents are usually images. If you use the default storage driver overlay2, then your Docker images are stored in /var/lib/docker/overlay2. There, you can find different files that represent read-only layers of a Docker image and a layer on top of it that contains your changes.
Let’s explore the content by using an example:
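For instance, pulling nginx and printing its overlay2 layer directories; piping through python3 -m json.tool is optional and only pretty-prints the JSON:

```
docker pull nginx
docker image inspect --format '{{ json .GraphDriver.Data }}' nginx | python3 -m json.tool
```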
The LowerDir contains the read-only layers of an image. The read-write layer that represents changes is part of the UpperDir. In my case, the NGINX UpperDir folder contains the log files:
The MergedDir represents the result of the UpperDir and LowerDir that is used by Docker to run the container. The WorkDir is an internal directory for overlay2 and should be empty.
Docker Volumes
It is possible to add a persistent store to containers to keep data longer than the container exists or to share the volume with the host or with other containers. A container can be started with a volume by using the -v option:
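For example (the container name is illustrative; -v /var/log creates an anonymous volume mounted at /var/log):

```
docker run -d --name nginx_with_volume -v /var/log nginx
```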
We can get information about the connected volume location by:
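For instance, by inspecting the container's mounts; the Source field points to the volume's directory on the Docker host:

```
docker inspect --format '{{ json .Mounts }}' nginx_with_volume | python3 -m json.tool
```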
The referenced directory contains files from the location /var/log of the NGINX container.
Clean up space used by Docker
It is recommended to use the Docker command to clean up unused containers. Containers, networks, images, and the build cache can be cleaned up by executing:
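```
docker system prune
```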
Additionally, you can also remove unused volumes by executing:
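```
docker volume prune
```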
Summary
Docker is an important part of many people’s environments and tooling. Sometimes, Docker feels a bit like magic by solving issues in a very smart way without telling the user how things are done behind the scenes. Still, Docker is a regular tool that stores its heavy parts in locations that can be opened and changed.
Sometimes, storage can fill up quickly. Therefore, it’s useful to inspect its root folder, but it is not recommended to delete or change any files manually. Instead, the prune commands can be used to free up disk space.
I hope you enjoyed the article. If you like it and feel the need for a round of applause, follow me on Twitter. I work at eBay Kleinanzeigen, one of the biggest classified companies globally. By the way, we are hiring!
Happy Docker exploring :)
References
- Docker storage driver documentation: https://docs.docker.com/storage/storagedriver/
- Overlay filesystem documentation: https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt