ECR Plugin

Use the ECR plugin to build and push Docker images to an AWS Elastic Container Registry (ECR). The following parameters are used to configure the plugin:

  • access_key - AWS access key ID used to authenticate
  • secret_key - AWS secret access key used to authenticate
  • region - AWS region of the registry (for example, us-east-1)
  • repo - repository name for the image
  • tag - repository tag for the image
  • force_tag - replace existing matched image tags
  • create_repository - automatically create repository in ECR
  • mirror - use a mirror registry instead of pulling images directly from the central Hub
  • bip - pass a bridge IP to the Docker daemon
  • storage_driver - use the aufs, devicemapper, btrfs or overlay driver
  • save - save image layers to the specified tar file (see docker save)
    • destination - absolute / relative destination path
    • tag - cherry-pick tags to save (optional)
  • load - restore image layers from the specified tar file
  • build_args - build arguments to pass to docker build
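
Taken together, a configuration using several of these parameters might look like the following sketch (the credentials and repository name are placeholders, not working values):

```yaml
publish:
  ecr:
    access_key: MyAWSAccessKey
    secret_key: MyAWSSecretKey
    region: us-east-1
    repo: foo/bar
    tag: latest
    create_repository: true   # create the ECR repository if it does not exist
    force_tag: true           # overwrite an existing image with the same tag
    storage_driver: overlay   # see the Known Issues section for driver support
```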

The following is a sample ECR configuration in your .drone.yml file:

publish:
  ecr:
    access_key: MyAWSAccessKey
    secret_key: MyAWSSecretKey
    region: us-east-1
    repo: foo/bar
    tag: latest
    file: Dockerfile

You may want to dynamically tag your image. Use the $$BRANCH, $$COMMIT and $$BUILD_NUMBER variables to tag your image with the branch name, commit SHA or build number:

publish:
  ecr:
    access_key: MyAWSAccessKey
    secret_key: MyAWSSecretKey
    region: us-east-1
    repo: foo/bar
    tag: $$BRANCH

Or you may prefer to build an image with multiple tags:

publish:
  ecr:
    access_key: MyAWSAccessKey
    secret_key: MyAWSSecretKey
    region: us-east-1
    repo: foo/bar
    tag:
      - latest
      - "1.0.1"
      - "1.0"

Note that in the above example we quote the version numbers. If the YAML parser interprets a value such as 1.0 as a number instead of a string, it will cause a parsing error.
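
To illustrate, here is how the YAML parser treats quoted and unquoted tag values (a sketch of the failure mode, not output from the plugin):

```yaml
tag:
  - latest   # a plain string is safe
  - 1.0      # parsed as the float 1.0, not the string "1.0" — causes an error
  - "1.0"    # quoted, parsed as a string — safe
```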

It’s also possible to pass build arguments to docker:

publish:
  ecr:
    access_key: MyAWSAccessKey
    secret_key: MyAWSSecretKey
    region: us-east-1
    repo: foo/bar
    build_args:
      - HTTP_PROXY=http://yourproxy.com
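
For the build argument to take effect, the Dockerfile must declare it with ARG. A minimal, hypothetical Dockerfile consuming the HTTP_PROXY argument above could look like:

```dockerfile
FROM alpine:3.18

# Declare the build argument passed via build_args in .drone.yml
ARG HTTP_PROXY

# Export it so commands during the build go through the proxy
ENV http_proxy=$HTTP_PROXY

RUN apk add --no-cache curl
```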

Layer Caching

The Drone build environment is ephemeral by default, meaning that your image layers are not saved between builds. The example below combines Drone's caching feature with Docker's save and load capabilities to cache and restore image layers between builds:

publish:
  ecr:
    access_key: MyAWSAccessKey
    secret_key: MyAWSSecretKey
    region: us-east-1
    repo: foo/bar
    tag:
      - latest
      - "1.0.1"
    load: docker/image.tar
    save:
      destination: docker/image.tar
      tag: latest

cache:
  mount:
    - docker/image.tar
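
Under the hood, the save and load options correspond to the standard docker save and docker load commands. As a rough sketch of what the plugin does with the configuration above (not a literal transcript of its output):

```shell
# On a cache hit, restore previously saved layers before building
docker load -i docker/image.tar

# ... docker build and push run here ...

# After the build, save the tagged image and its layers for the next run
docker save -o docker/image.tar foo/bar:latest
```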

You might also want to create a .dockerignore file in your repo to exclude image.tar from the Docker build context:

docker/*

In some cases caching will greatly improve build performance, however, the tradeoff is that caching Docker image layers may consume very large amounts of disk space.

Troubleshooting

For detailed output you can set the DOCKER_LAUNCH_DEBUG environment variable in your plugin configuration. This starts Docker with verbose logging enabled.

publish:
  docker:
    environment:
      - DOCKER_LAUNCH_DEBUG=true

Known Issues

There are known issues when attempting to run this plugin on CentOS, RedHat, and other Linux installations that do not have a supported storage driver installed. You can check by running docker info | grep 'Storage Driver:' on your host machine. If the storage driver is not aufs or overlay you will need to reconfigure your host machine.

This error occurs when trying to use the default aufs storage driver but aufs is not installed:

level=fatal msg="Error starting daemon: error initializing graphdriver: driver not supported

This error occurs when trying to use the overlay storage driver but overlay is not installed:

level=error msg="'overlay' not found as a supported filesystem on this host.
Please ensure kernel is new enough and has overlay support loaded."
level=fatal msg="Error starting daemon: error initializing graphdriver: driver not supported"

This error occurs when using CentOS or RedHat, which default to the devicemapper storage driver:

level=error msg="There are no more loopback devices available."
level=fatal msg="Error starting daemon: error initializing graphdriver: loopback mounting failed"
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?

The above issue can be resolved by setting storage_driver: vfs in the .drone.yml file. This may work, but will have very poor performance as discussed here.
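
For example, assuming the same configuration as earlier, the workaround is a one-line addition:

```yaml
publish:
  ecr:
    access_key: MyAWSAccessKey
    secret_key: MyAWSSecretKey
    region: us-east-1
    repo: foo/bar
    storage_driver: vfs   # slow, but works without aufs/overlay kernel support
```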

This website is a public GitHub repository, which is forked from upstream Drone CI documentation. Please help us by forking and improving upstream Drone CI documentation or Tea CI documentation.