So far, this code has been tested with only a small number of use cases. You are invited to contribute.
compose-dump lets you back up and (not yet) restore Docker Compose
projects. Like docker-compose, this tool usually operates in a project
folder. It is intended to be a simple tool for use within a broader
backup logic. The extent of a backup can be controlled by content scopes
and service selection:
- Archives Docker Compose projects.
- Optionally includes the configuration with all its referenced files.
- Optionally includes the volumes of specified services.
- Stores dumps in a directory, as an archive on disk or as an archive streamed to standard output.
See planned features below.
With pipsi (recommended to avoid library version conflicts):
$ pipsi install compose-dump
Or with pip:
$ pip install compose-dump
To install an editable development instance:
$ cd compose-dump
$ pipsi install -e .  # or use `pip`
Fully dump a Compose project from within its folder:
$ cd project_path
$ compose-dump backup -t /var/backups/compose
Write a gzip-compressed archive to a remote host via ssh:
$ cd project_path
$ compose-dump backup -x gz | ssh user@host "cat - > ~/backup.tar.gz"
Only dump the configuration and the data from container volumes of the service web:
$ compose-dump backup --config --volumes web
Back up all projects that are located beneath the current directory with a find invocation:
$ find . -name "docker-compose.yml" -type f -execdir compose-dump backup -t /var/backups/compose \;
Command line reference:
$ compose-dump
$ compose-dump backup --help
Any data that is located outside the project’s scope is ignored. In
particular, this includes mounted volumes that are not within the
project path and volumes of volumes_from-referenced containers.
Consider this not a limitation, but a feature that endorses good
practices; handle these objects in your broader backup logic.
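For illustration, in a hypothetical docker-compose.yml like the following, the ./data bind mount would end up in a dump, while the /srv/shared mount and the volumes of the storage container would be ignored (the service and path names are made up for this example):

```yaml
version: '2'
services:
  web:
    image: nginx
    volumes:
      - ./data:/usr/share/nginx/html  # inside the project path: dumped
      - /srv/shared:/mnt/shared       # outside the project path: ignored
    volumes_from:
      - storage                       # referenced container's volumes: ignored
  storage:
    image: busybox
```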
The resulting dump is structured this way:
+ <hostname>_<project_name>__<shorted_path_hash>___<date>_<time>  # that's the default
  - Manifest.yml
  + config
    - <config_files>…  # usually docker-compose.yml and its referenced files
    - <build_contexts>…
  + volumes
    + mounted
      <host path relative to project path>…
    + project
      <project volume in a tar archive>…
    + services
      + <service>…
        <service volume in a tar archive>…
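The default top-level name thus encodes a dump's origin and time. As a small sketch, such a name could be split back into its parts like this (the regex and field names are assumptions derived from the scheme above, not part of compose-dump's API; it also assumes that hostname and project name contain no underscores):

```python
import re
from typing import Dict

# Pattern for <hostname>_<project_name>__<shorted_path_hash>___<date>_<time>,
# assuming hostname and project name themselves contain no underscores.
DUMP_NAME = re.compile(
    r'^(?P<hostname>[^_]+)_(?P<project>[^_]+)'
    r'__(?P<path_hash>[^_]+)'
    r'___(?P<date>[^_]+)_(?P<time>.+)$'
)

def parse_dump_name(name: str) -> Dict[str, str]:
    """Split a default dump name into its components."""
    match = DUMP_NAME.match(name)
    if match is None:
        raise ValueError('not a default dump name: %s' % name)
    return match.groupdict()
```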
The integration tests require a Docker client on the test machine. To
keep the temporary directories that contain the integration tests’
results, invoke pytest with the respective option.
Before opening a pull request, make sure you run the tests against all
supported Python versions.
You are free to hate me for relying mainly on integration tests. But keep it to yourself; the world is already filled up with hatred. I suggest that anyone with such sentiments uses this dark energy to implement improvements.
The code may seem cumbersome when it comes to paths. This is caused by
anticipation of the file system path protocol that comes with
Python 3.6 and later. The rule of thumb here is: always use
pathlib.Path objects to represent paths, convert values to str for
function calls, and convert results back to Path instances. Until 3.6’s
reign has come.
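In practice, that rule of thumb might look like this (a minimal sketch; the function is hypothetical and not part of compose-dump):

```python
import os.path
from pathlib import Path

def config_path(project_dir: Path, filename: str) -> Path:
    """Join a config filename onto the project directory.

    Paths are handled as pathlib.Path objects internally; they are
    converted to str at the call boundary, since before Python 3.6
    os.path functions do not accept Path objects, and the result is
    wrapped in a Path again.
    """
    joined = os.path.join(str(project_dir), filename)
    return Path(joined)
```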
TODO / Known issues / Caveats
- make use of compose config hashes
- docs: point to environment variables regarding tls config
- make use of mypy
You may run into issues if a volume’s archive delivered by the Docker daemon is larger than the available memory; thus you should avoid such scenarios on production systems. This does not apply to mounted volumes. If you can’t avoid such cases, please open an issue.
- test volumes defined in extended services
- filter volumes
- only pause actually affected services
- backup-configuration from a file in a project’s folder
- respect .dockerignore
- read config from stdin
- implement an automated restoration of a project-dump
  - read from stdin