
build

Performs a docker build, using a Dockerfile to build the application, and tags the resulting image. If the conventions are followed, no additional flags are needed, but the following flags are available:

Flag                              Description
--file, -f <path to Dockerfile>   Overrides the default Dockerfile location (which is $PWD), or use - to read the Dockerfile from stdin
--no-login                        Disables login to the docker registry (good for local testing)
--no-pull                         Disables pulling of remote images if they already exist locally (good for local testing)
--build-arg key=value             Passes an additional Docker build-arg
--platform value                  Specifies the target platform(s) for multi-arch builds, either a single platform (--platform linux/amd64) or several (--platform linux/amd64,linux/arm64). Multi-platform builds are pushed directly to the registry.
$ build --file docker/Dockerfile.build --no-login --build-arg AUTH_TOKEN=abc

Build-args

The following build-args are automatically made available:

Arg         Value
CI_COMMIT   The commit being built, as exposed by CI
CI_BRANCH   The branch being built, as exposed by CI

They can be used in a Dockerfile like this:

FROM ubuntu
ARG CI_BRANCH

RUN echo "Building $CI_BRANCH"

Export content from build

The buildtools build command supports exporting content from the docker build process, see Custom build outputs. By defining a special stage named export in the Dockerfile, you can use the COPY directive to copy files from the build to the local machine. The copied files are placed in a folder named exported.

Example

Consider a Dockerfile like this:

FROM debian as build
RUN echo "text to be copied to localhost" >> /testfile

# -- export stage
FROM scratch as export
# Copies the file /testfile from `build` stage to localhost
COPY --from=build /testfile .

# -- resulting image stage
FROM scratch
# Do other stuff

Let's try it:

$ ls
Dockerfile

$ cat Dockerfile
FROM debian as build
RUN echo "text to be copied to localhost" >> /testfile

# -- export stage
FROM scratch as export
# Copies the file /testfile from `build` stage to localhost
COPY --from=build /testfile .

# -- resulting image stage
FROM scratch
# Do other stuff
$ build
... <build output>
$ ls
Dockerfile  exported

$ ls exported
testfile

$ cat exported/testfile
text to be copied to localhost

Multi-platform builds

Build-tools supports building Docker images for multiple platforms (architectures) simultaneously using buildkit's native multi-platform support. This is useful for creating images that can run on different architectures like AMD64, ARM64, ARM/v7, etc.

Basic usage

To build for multiple platforms, provide a comma-separated list of platform identifiers:

$ build --platform linux/amd64,linux/arm64

Common platforms:

  • linux/amd64 - 64-bit x86 (Intel/AMD)
  • linux/arm64 - 64-bit ARM (Apple Silicon, ARM servers)
  • linux/arm/v7 - 32-bit ARM (Raspberry Pi 2/3, newer 32-bit ARM devices)
  • linux/arm/v6 - 32-bit ARM (Raspberry Pi 1/Zero, older ARM devices)

How it works

Multi-platform builds:

  1. Build the image for all specified platforms in parallel using buildkit
  2. Create a manifest list that references all platform-specific images
  3. Push directly to the configured registry (multi-platform manifests cannot be loaded into the local Docker daemon)
  4. Tag all platform images with the same tags (commit, branch, and latest if applicable)
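
Once pushed, you can verify that the registry contains a manifest list covering all requested platforms, for example with docker buildx imagetools (the image name below is hypothetical):

# Inspect the pushed manifest list and its per-platform entries
$ docker buildx imagetools inspect registry.example.com/my-app:latest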

Important notes:

  • Multi-platform builds require buildkit (Docker 19.03+)
  • Images are automatically pushed to the registry during the build process
  • You may need QEMU for cross-platform emulation when building on a single architecture
  • Multi-platform builds are typically slower than single-platform builds

Example

# Build for AMD64 and ARM64
$ build --platform linux/amd64,linux/arm64

# The built images will be pushed to the registry with manifest list support
# Clients pulling the image will automatically get the correct architecture

Requirements

Option 1: Use a standalone BuildKit instance (recommended)

Set the BUILDKIT_HOST environment variable to connect directly to a buildkit instance:

# Example: connect to buildkit running in a container
export BUILDKIT_HOST=docker-container://buildkitd

# Example: connect to buildkit via TCP
export BUILDKIT_HOST=tcp://localhost:1234

# Example: connect to buildkit via Unix socket
export BUILDKIT_HOST=unix:///run/buildkit/buildkitd.sock

You can run a standalone buildkit container:

docker run -d --name buildkitd --privileged moby/buildkit:latest
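
If you have the buildctl CLI installed, a quick sanity check verifies that the instance is reachable (assuming the container name used above):

export BUILDKIT_HOST=docker-container://buildkitd
# Lists the available buildkit workers and their supported platforms
buildctl debug workers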

BUILDKIT_HOST behavior

When BUILDKIT_HOST is set, all builds (single-platform and multi-platform) use the buildkit client directly. Images are pushed to the registry during the build, so the push command becomes a no-op.

Option 2: Enable containerd snapshotter in Docker

If not using BUILDKIT_HOST, multi-platform builds require Docker to be configured with the containerd snapshotter. This is because Docker's default storage driver doesn't support the image exporter needed for multi-platform manifest lists.

Enable it by adding to /etc/docker/daemon.json:

{
  "features": {
    "containerd-snapshotter": true
  }
}

Then restart Docker:

sudo systemctl restart docker
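
You can then check that the snapshotter is active (exact output varies by Docker version, but it should mention io.containerd.snapshotter.v1):

docker info -f '{{ .DriverStatus }}'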

QEMU for cross-platform emulation

For cross-platform builds (e.g., building ARM on x86), you may need to set up QEMU:

docker run --privileged --rm tonistiigi/binfmt --install all

This is typically pre-configured in most CI/CD environments (GitHub Actions, GitLab CI, etc.).
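
A quick way to confirm that emulation is working is to run a container for a foreign architecture, e.g. on an x86 host:

# Should print aarch64 if ARM64 emulation is set up correctly
docker run --rm --platform linux/arm64 alpine uname -m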

Layer caching with ECR

When using buildkit (via BUILDKIT_HOST), you can configure AWS ECR as a remote layer cache backend. This significantly speeds up builds by caching intermediate layers in an ECR repository.

Configuration

Add the following to your .buildtools.yaml:

cache:
  ecr:
    url: 123456789.dkr.ecr.us-east-1.amazonaws.com/my-cache-repo
    tag: buildcache  # optional, defaults to "buildcache"

Or use environment variables:

export BUILDTOOLS_CACHE_ECR_URL=123456789.dkr.ecr.us-east-1.amazonaws.com/my-cache-repo
export BUILDTOOLS_CACHE_ECR_TAG=buildcache  # optional

How it works

When ECR cache is configured:

  1. Cache import: Before building, buildkit attempts to pull cached layers from the ECR repository
  2. Cache export: After building, buildkit pushes all cached layers to the ECR repository using mode=max (caches all stages)
  3. ECR-compatible format: Uses image-manifest=true and oci-mediatypes=true for ECR compatibility
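
For reference, these settings correspond roughly to buildkit's registry cache options; the sketch below shows an equivalent buildctl invocation (buildtools constructs this internally, so you don't run it yourself):

buildctl build \
  --frontend dockerfile.v0 \
  --local context=. --local dockerfile=. \
  --import-cache type=registry,ref=123456789.dkr.ecr.us-east-1.amazonaws.com/my-cache-repo:buildcache \
  --export-cache type=registry,ref=123456789.dkr.ecr.us-east-1.amazonaws.com/my-cache-repo:buildcache,mode=max,image-manifest=true,oci-mediatypes=true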

Requirements

  • BUILDKIT_HOST must be set (ECR cache only works with buildkit client)
  • The ECR repository for cache must exist before running the build
  • AWS credentials must have access to both the image registry and cache registry

Example

# Create the cache repository in ECR (one-time setup)
aws ecr create-repository --repository-name my-cache-repo

# Configure buildkit and run the build
export BUILDKIT_HOST=docker-container://buildkitd
export BUILDTOOLS_CACHE_ECR_URL=123456789.dkr.ecr.us-east-1.amazonaws.com/my-cache-repo

build --platform linux/amd64,linux/arm64

Separate cache repository

It's recommended to use a dedicated ECR repository for cache storage, separate from your image repositories. This allows you to apply different lifecycle policies and keeps your cache isolated.
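
For example, a lifecycle policy that expires cache images after 14 days could look like this (a sketch, tune the retention to how often you build):

aws ecr put-lifecycle-policy \
  --repository-name my-cache-repo \
  --lifecycle-policy-text '{
    "rules": [{
      "rulePriority": 1,
      "description": "Expire cache layers after 14 days",
      "selection": {
        "tagStatus": "any",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 14
      },
      "action": { "type": "expire" }
    }]
  }'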