Documentation

Shipwright is an extensible framework for building container images on Kubernetes.

Shipwright supports popular tools such as Kaniko, Cloud Native Buildpacks, Buildah, and more!

Shipwright is based around four elements for each build:

  1. Source code - the “what” you are trying to build
  2. Output image - “where” you are trying to deliver your application
  3. Build strategy - “how” your application is assembled
  4. Invocation - “when” you want to build your application

Comparison with local image builds

Developers who use Docker are familiar with this process:

  1. Clone source from a git-based repository (“what”)

  2. Build the container image (“when” and “how”)

docker build -t registry.mycompany.com/myorg/myapp:latest .

  3. Push the container image to your registry (“where”)

docker push registry.mycompany.com/myorg/myapp:latest

Shipwright Build APIs

Shipwright’s Build API consists of four core CustomResourceDefinitions (CRDs):

  1. Build - defines what to build, and where the application should be delivered.
  2. BuildStrategy and ClusterBuildStrategy - define how to build an application with a given image building tool.
  3. BuildRun - invokes the build. You create a BuildRun to tell Shipwright to start building your application.

Build

The Build object provides a playbook on how to assemble your specific application. The simplest build consists of a git source, a build strategy, and an output image:

apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: kaniko-golang-build
  annotations:
    build.build.dev/build-run-deletion: "true"
spec:
  source:
    url: https://github.com/sbose78/taxi
  strategy:
    name: kaniko
    kind: ClusterBuildStrategy
  output:
    image: registry.mycompany.com/my-org/taxi-app:latest

Builds can be extended to push to private registries, use a different Dockerfile, and more.

BuildStrategy and ClusterBuildStrategy

BuildStrategy and ClusterBuildStrategy are related APIs to define how a given tool should be used to assemble an application. They are distinguished by their scope - BuildStrategy objects are namespace scoped, whereas ClusterBuildStrategy objects are cluster scoped.

The spec of a BuildStrategy or ClusterBuildStrategy consists of a buildSteps list, whose entries look and feel like Kubernetes container specifications. Below is an example spec for Kaniko, which can build an image from a Dockerfile within a container:

# this is a fragment of a manifest
spec:
  buildSteps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.3.0
      workingDir: /workspace/source
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - SETGID
            - SETUID
            - SETFCAP
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
      command:
        - /kaniko/executor
      args:
        - --skip-tls-verify=true
        - --dockerfile=$(build.dockerfile)
        - --context=/workspace/source/$(build.source.contextDir)
        - --destination=$(build.output.image)
        - --oci-layout-path=/workspace/output/image
        - --snapshotMode=redo
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 250m
          memory: 65Mi

BuildRun

Each BuildRun object invokes a build on your cluster. You can think of these as Kubernetes Jobs or Tekton TaskRuns - they represent a workload on your cluster, ultimately resulting in a running Pod. See BuildRun for more details.
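A minimal BuildRun referencing the Build shown earlier could look like the following sketch (the metadata name is illustrative; the buildRef field is how the v1alpha1 API used in the example above references a Build by name):

```yaml
apiVersion: build.dev/v1alpha1
kind: BuildRun
metadata:
  # hypothetical name for this example
  name: kaniko-golang-buildrun
spec:
  # references the kaniko-golang-build Build defined above
  buildRef:
    name: kaniko-golang-build
```

Creating this object tells Shipwright to start assembling the application described by the referenced Build.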

1 - Getting Started

1.1 - Installation

Install Shipwright on your Kubernetes cluster.

The Shipwright Build APIs and controllers can be installed directly with our release deployment, or with our operator.

Prerequisites

  • Kubernetes 1.21 or later.

  • Tekton pipelines v0.41 or later.

    kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
    

Installing Shipwright Builds with the Operator

The Shipwright operator is designed to be installed with the Operator Lifecycle Manager (“OLM”). Before installation, ensure that OLM has been deployed on your cluster by following the OLM installation instructions.

Installation

Once OLM has been deployed, use the following command to install the latest operator release from operatorhub.io:

$ kubectl apply -f https://operatorhub.io/install/shipwright-operator.yaml

Usage

To deploy and manage Shipwright Builds in your cluster, first make sure this operator is installed and running.

Next, create the following:

---
apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: shipwright-operator
spec:
  targetNamespace: shipwright-build

The operator will deploy Shipwright Builds in the provided targetNamespace. When .spec.targetNamespace is not set, the namespace will default to shipwright-build. Refer to the ShipwrightBuild documentation for more information about this custom resource.
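Once the operator has reconciled the resource, you can verify the installation from the command line. This is a sketch: it assumes the plural resource name shipwrightbuilds for the ShipwrightBuild CRD and the default target namespace.

```shell
# Check the status of the ShipwrightBuild resource
kubectl get shipwrightbuilds

# Check that the build controller is running in the target namespace
kubectl get pods -n shipwright-build
```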

Installing Shipwright Builds Directly

We also publish a Kubernetes manifest that installs Shipwright directly into the shipwright-build namespace. Applying this manifest requires cluster administrator permissions:

$ kubectl apply -f https://github.com/shipwright-io/build/releases/latest/download/release.yaml --server-side=true

Installing Sample Build Strategies

The Shipwright community maintains a curated set of build strategies for popular build tools. These can be optionally installed after Shipwright Builds has been deployed:

$ kubectl apply -f https://github.com/shipwright-io/build/releases/latest/download/sample-strategies.yaml
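After applying the manifest, you can list the installed strategies using the clusterbuildstrategies resource name (taken from the CRD group mentioned later in this documentation):

```shell
$ kubectl get clusterbuildstrategies
```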

2 - Shipwright Builds

2.1 - Build Strategies

Overview

There are two types of strategies, the ClusterBuildStrategy (clusterbuildstrategies.shipwright.io/v1beta1) and the BuildStrategy (buildstrategies.shipwright.io/v1beta1). Both strategies define a shared group of steps needed to fulfill the application build.

A ClusterBuildStrategy is available cluster-wide, while a BuildStrategy is available within a namespace.

Available ClusterBuildStrategies

Well-known strategies can be bootstrapped from here. The currently supported ClusterBuildStrategies are:

Name | Supported platforms
---- | -------------------
buildah | all
multiarch-native-buildah | all
BuildKit | all
buildpacks-v3-heroku | linux/amd64 only
buildpacks-v3 | linux/amd64 only
kaniko | all
ko | all
source-to-image | linux/amd64 only

Available BuildStrategies

The currently supported namespaced BuildStrategies are:

Name | Supported platforms
---- | -------------------
buildpacks-v3-heroku | linux/amd64 only
buildpacks-v3 | linux/amd64 only

Buildah

The buildah ClusterBuildStrategy uses buildah to build and push a container image from a Dockerfile. The Dockerfile should be specified on the Build resource.

The strategy is available in two variants: one where Shipwright pushes the image (shipwright-managed push), and one where the strategy itself pushes the image (strategy-managed push).

See Output directory vs. output image to learn more about the differences between a shipwright-managed and a strategy-managed push.

Installing Buildah Strategy

To install both variants, use:

kubectl apply -f samples/v1beta1/buildstrategy/buildah/buildstrategy_buildah_shipwright_managed_push_cr.yaml
kubectl apply -f samples/v1beta1/buildstrategy/buildah/buildstrategy_buildah_strategy_managed_push_cr.yaml

Multi-arch Native buildah

The multiarch-native-buildah ClusterBuildStrategy uses buildah to build and push a container image from a Dockerfile.

The strategy builds the image for the platforms that are listed in the architectures parameter of the Build object that references it. It requires the cluster to have the necessary infrastructure to run the builds: worker nodes for each architecture listed in the architectures parameter.

The ClusterBuildStrategy runs a main orchestrator pod. The orchestrator pod creates one auxiliary job for each architecture requested by the Build. The auxiliary jobs are responsible for building the container image and for coordinating with the orchestrator pod.

When all the builds are completed, the orchestrator pod will compose a manifest-list image and push it to the target registry.

The service account that runs the strategy must be bound to a role able to create, list, get and watch batch/v1 jobs and core/v1 pods resources. The role also needs to allow the create verb for the pods/exec resource. Finally, when running in OKD or OpenShift clusters, the service account must be able to use the privileged SecurityContextConstraint.

Installing Multi-arch Native buildah Strategy

To install the cluster-scoped strategy, use:

kubectl apply -f samples/v1beta1/buildstrategy/multiarch-native-buildah/buildstrategy_multiarch_native_buildah_cr.yaml

For each namespace where you want to use the strategy, you also need to apply the RBAC rules that allow the service account to run the strategy. If the service account is named pipeline (default), you can use:

kubectl apply -n <namespace> -f  samples/v1beta1/buildstrategy/multiarch-native-buildah/

Parameters

The build strategy provides the following parameters that you can set in a Build or BuildRun to control its behavior:

Parameter | Description | Default
--------- | ----------- | -------
architectures | The list of architectures to build the image for. | [ "amd64" ]
build-args | The values for the ARGs in the Dockerfile. Values must be in the format KEY=VALUE. | empty array
dockerfile | The path to the Dockerfile to be used for building the image. | Dockerfile
from | Image name used to replace the value in the first FROM instruction in the Dockerfile. | empty string
runtime-stage-from | Image name used to replace the value in the last FROM instruction in the Dockerfile. | empty string
build-contexts | Specify an additional build context using its short name and its location. Additional build contexts can be referenced in the same manner as we access different stages in a COPY instruction. Use values in the form "name=value". See man buildah-build. | empty array
registries-block | A list of registries to block. Images from these registries will not be pulled during the build. | empty array
registries-insecure | A list of insecure registries. Images from these registries will be pulled without verifying the TLS certificate. | empty array
registries-search | A list of registries to search for short-name images. Images missing the fully-qualified name of a registry will be looked up in these registries. | empty array
request-cpu | The CPU request to set for the auxiliary jobs. | 250m
request-memory | The memory request to set for the auxiliary jobs. | 64Mi
limit-cpu | The CPU limit to set for the auxiliary jobs. | no limit
limit-memory | The memory limit to set for the auxiliary jobs. | 2Gi

Volumes

Volume | Description
------ | -----------
oci-archive-storage | Volume that contains the temporary single-arch image manifests in OCI format. It can be set to a persistent volume, e.g., for large images. The default is an emptyDir volume, which means that the cached data is discarded at the end of a BuildRun and ephemeral storage is used (according to the cluster infrastructure setup).
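To back this volume with a persistent volume instead of the default emptyDir, the volume can be overridden in a BuildRun. This is a sketch that assumes the volume override support of the Build APIs; the BuildRun and PVC names are hypothetical:

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: multiarch-native-buildah-run
spec:
  build:
    name: multiarch-native-buildah-ex
  volumes:
    # override the strategy-defined emptyDir with a PVC
    - name: oci-archive-storage
      persistentVolumeClaim:
        claimName: build-cache  # hypothetical PVC
```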

Example build

---
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: multiarch-native-buildah-ex
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    name: multiarch-native-buildah
    kind: ClusterBuildStrategy
  paramValues:
    - name: architectures
      values:
        # This will require a cluster with both arm64 and amd64 nodes
        - value: "amd64"
        - value: "arm64"
    - name: build-contexts
      values:
        - value: "ghcr.io/shipwright-io/shipwright-samples/golang:1.18=docker://ghcr.io/shipwright-io/shipwright-samples/golang:1.18"
    # The buildah `--from` replaces the first FROM statement
    - name: from
      value: "" # Using the build-contexts for this example
    # The runtime-stage-from implements the logic to replace the last stage FROM image of a Dockerfile
    - name: runtime-stage-from
      value: docker://gcr.io/distroless/static:nonroot
    - name: dockerfile
      value: Dockerfile
  output:
    image: image-registry.openshift-image-registry.svc:5000/build-examples/taxi-app

Buildpacks v3

The buildpacks-v3 BuildStrategy/ClusterBuildStrategy uses a Cloud Native Buildpacks (CNB) builder container image, and is able to run the buildpacks lifecycle commands.

Installing Buildpacks v3 Strategy

You can install the BuildStrategy in your namespace or install the ClusterBuildStrategy at cluster scope so that it can be shared across namespaces.

To install the cluster scope strategy, you can choose between the Paketo and Heroku buildpacks family:

# Paketo
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_cr.yaml

# Heroku
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml

To install the namespaced scope strategy, you can choose between the Paketo and Heroku buildpacks family:

# Paketo
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_namespaced_cr.yaml

# Heroku
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml

Kaniko

The kaniko ClusterBuildStrategy uses Kaniko’s executor to build a container image from a Dockerfile and a context directory.

Installing Kaniko Strategy

To install the cluster scope strategy, use:

kubectl apply -f samples/v1beta1/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml

BuildKit

BuildKit is composed of the buildctl client and the buildkitd daemon. The buildkit ClusterBuildStrategy runs in daemonless mode, where both the client and an ephemeral daemon run in a single container. In addition, it runs without privileges (rootless).

Cache Exporters

By default, the buildkit ClusterBuildStrategy will use caching to optimize the build times. When pushing an image to a registry, it will use the inline export cache, which appends cache information to the image that is built. Please refer to export-cache docs for more information. Caching can be disabled by setting the cache parameter to "disabled". See Defining ParamValues for more information.

Build-args and secrets

The sample build strategy contains array parameters to set values for ARGs in your Dockerfile, and for mounts with type=secret. The parameter names are build-args and secrets. Defining ParamValues contains example usage.

Multi-platform builds

The sample build strategy contains a platforms array parameter that you can set to leverage BuildKit’s support to build multi-platform images. If you do not set this value, the image is built for the platform that is supported by the FROM image. If that image supports multiple platforms, then the image will be built for the platform of your Kubernetes node.
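For example, a Build using the buildkit strategy could request a multi-platform image via the platforms array parameter described above (a fragment; the platform values are illustrative and require a registry and base image that support them):

```yaml
spec:
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  paramValues:
    - name: platforms
      values:
        - value: linux/amd64
        - value: linux/arm64
```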

Known Limitations

The buildkit ClusterBuildStrategy currently locks the following parameters:

  • To allow running rootless, it requires both AppArmor and seccomp to be disabled using the unconfined profile.

Usage in Clusters with Pod Security Standards

The BuildKit strategy contains security-related settings and therefore depends on the respective cluster setup and administrative configuration. These settings are:

  • Defining the unconfined profile for both AppArmor and seccomp, as required by the underlying rootlesskit.
  • The allowPrivilegeEscalation setting is set to true so that binaries with the setuid bit set can run with “root”-level privileges. In the case of BuildKit, this is required by rootlesskit in order to set the user namespace mapping file /proc/<pid>/uid_map.
  • Use of a non-root user with UID 1000/GID 1000 as the runAsUser.

These settings have no effect if Pod Security Standards are not used.

Please note: At this point in time, there is no way to run rootlesskit to start the BuildKit daemon without the allowPrivilegeEscalation flag set to true. Clusters with the Restricted security standard in place will not be able to use this build strategy.

Installing BuildKit Strategy

To install the cluster scope strategy, use:

kubectl apply -f samples/v1beta1/buildstrategy/buildkit/buildstrategy_buildkit_cr.yaml

ko

The ko ClusterBuildStrategy uses ko’s publish command to build an image from a Golang main package.

Installing ko Strategy

To install the cluster scope strategy, use:

kubectl apply -f samples/v1beta1/buildstrategy/ko/buildstrategy_ko_cr.yaml

Parameters

The build strategy provides the following parameters that you can set in a Build or BuildRun to control its behavior:

Parameter | Description | Default
--------- | ----------- | -------
go-flags | Value for the GOFLAGS environment variable. | empty
go-version | Version of Go. Must match a tag from the golang image. | 1.22
ko-version | Version of ko. Must be either latest for the newest release, or a ko release name. | latest
package-directory | The directory inside the context directory containing the main package. | .
target-platform | Target platform to be built. For example: linux/arm64. Multiple platforms can be provided separated by comma, for example: linux/arm64,linux/amd64. The value all will build all platforms supported by the base image. The value current will build the platform on which the build runs. | current
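A Build using the ko strategy could set these parameters as follows (a sketch; the Build name, context directory, and output image are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: ko-example  # hypothetical name
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: source-build  # illustrative context directory
  strategy:
    name: ko
    kind: ClusterBuildStrategy
  paramValues:
    - name: go-version
      value: "1.22"
    - name: target-platform
      value: linux/arm64,linux/amd64
  output:
    image: registry.mycompany.com/my-org/ko-example:latest
```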

Volumes

Volume | Description
------ | -----------
gocache | Volume that contains the GOCACHE. Can be set to a persistent volume to optimize compilation performance for rebuilds. The default is an emptyDir volume, which means that the cached data is discarded at the end of a BuildRun.

Source to Image

This BuildStrategy is composed of source-to-image and kaniko, in order to generate a Dockerfile and prepare the application to be built later on with a builder.

s2i requires a specially crafted builder image, which can be provided via the builderImage parameter on the Build resource.

Installing Source to Image Strategy

To install the cluster scope strategy use:

kubectl apply -f samples/v1beta1/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml

Build Steps

  1. s2i to generate a Dockerfile and prepare the source code for the image build;
  2. kaniko to create and push the container image to what is defined in output.image.

Strategy parameters

Strategy parameters allow users to parameterize their strategy definition by controlling the parameter values via the Build or BuildRun resources.

Users defining parameters in their strategies need to understand the following:

  • Definition: A list of parameters should be defined under spec.parameters. Each list item should consist of a name, a description, a type (either "array" or "string"), and optionally a default value (for type=string) or default values (for type=array). If no default(s) are provided, then the user must define a value in the Build or BuildRun.

  • Usage: In order to use a parameter in the strategy steps, use the following syntax for type=string: $(params.your-parameter-name). String parameters can be used in all places in the buildSteps. Some example scenarios are:

    • image: to use a custom tag, for example golang:$(params.go-version) as it is done in the ko sample build strategy
    • args: to pass data into your builder command
    • env: to force a user to provide a value for an environment variable.

    Arrays are referenced using $(params.your-array-parameter-name[*]), and can only be used as the value for args or command because these are defined as arrays by Kubernetes. For every item in the array, an arg will be set. For example, if you specify this in your build strategy step:

    spec:
      parameters:
        - name: tool-args
          description: Parameters for the tool
          type: array
      steps:
        - name: a-step
          command:
            - some-tool
          args:
            - $(params.tool-args[*])
    

    If the build user sets the value of tool-args to ["--some-arg", "some-value"], then the Pod will contain these args:

    spec:
      containers:
        - name: a-step
          args:
          ...
            - --some-arg
            - some-value
    
  • Parameterize: Any Build or BuildRun referencing your strategy, can set a value for your-parameter-name parameter if needed.

Note: Users can provide parameter values as simple strings or as references to keys in ConfigMaps and Secrets. If they use a ConfigMap or Secret, then the value can only be used if the parameter is used in the command, args, or env section of the buildSteps. For example, the above-mentioned scenario to set a step’s image to golang:$(params.go-version) does not allow the usage of ConfigMaps or Secrets.
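For instance, a Build could source one of the values for an array parameter from a ConfigMap key. This is a sketch assuming the configMapValue reference style of the Build APIs; the ConfigMap name and key are hypothetical:

```yaml
spec:
  paramValues:
    - name: build-args
      values:
        # value taken from a key in a ConfigMap instead of an inline string
        - configMapValue:
            name: build-config     # hypothetical ConfigMap
            key: version-build-arg # hypothetical key
```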

The following example is from the BuildKit sample build strategy. It defines and uses several parameters:

---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildkit
  ...
spec:
  parameters:
  - name: build-args
    description: "The values for the ARGs in the Dockerfile. Values must be in the format KEY=VALUE."
    type: array
    defaults: []
  - name: cache
    description: "Configure BuildKit's cache usage. Allowed values are 'disabled' and 'registry'. The default is 'registry'."
    type: string
    default: registry
  - name: insecure-registry
    type: string
    description: "enables the push to an insecure registry"
    default: "false"
  - name: secrets
    description: "The secrets to pass to the build. Values must be in the format ID=FILE_CONTENT."
    type: array
    defaults: []
  - name: dockerfile
    description: The path to the Dockerfile to be used for building the image.
    type: string
    default: "Dockerfile"
  steps:
    ...
    - name: build-and-push
      image: moby/buildkit:v0.17.0-rootless
      imagePullPolicy: Always
      workingDir: $(params.shp-source-root)
      ...
      command:
        - /bin/ash
      args:
        - -c
        - |
          set -euo pipefail

          # Prepare the file arguments
          DOCKERFILE_PATH='$(params.shp-source-context)/$(params.dockerfile)'
          DOCKERFILE_DIR="$(dirname "${DOCKERFILE_PATH}")"
          DOCKERFILE_NAME="$(basename "${DOCKERFILE_PATH}")"

          # We only have ash here and therefore no bash arrays to help add dynamic arguments (the build-args) to the build command.

          echo "#!/bin/ash" > /tmp/run.sh
          echo "set -euo pipefail" >> /tmp/run.sh
          echo "buildctl-daemonless.sh \\" >> /tmp/run.sh
          echo "build \\" >> /tmp/run.sh
          echo "--progress=plain \\" >> /tmp/run.sh
          echo "--frontend=dockerfile.v0 \\" >> /tmp/run.sh
          echo "--opt=filename=\"${DOCKERFILE_NAME}\" \\" >> /tmp/run.sh
          echo "--local=context='$(params.shp-source-context)' \\" >> /tmp/run.sh
          echo "--local=dockerfile=\"${DOCKERFILE_DIR}\" \\" >> /tmp/run.sh
          echo "--output=type=image,name='$(params.shp-output-image)',push=true,registry.insecure=$(params.insecure-registry) \\" >> /tmp/run.sh
          if [ "$(params.cache)" == "registry" ]; then
            echo "--export-cache=type=inline \\" >> /tmp/run.sh
            echo "--import-cache=type=registry,ref='$(params.shp-output-image)' \\" >> /tmp/run.sh
          elif [ "$(params.cache)" == "disabled" ]; then
            echo "--no-cache \\" >> /tmp/run.sh
          else
            echo -e "An invalid value for the parameter 'cache' has been provided: '$(params.cache)'. Allowed values are 'disabled' and 'registry'."
            echo -n "InvalidParameterValue" > '$(results.shp-error-reason.path)'
            echo -n "An invalid value for the parameter 'cache' has been provided: '$(params.cache)'. Allowed values are 'disabled' and 'registry'." > '$(results.shp-error-message.path)'
            exit 1
          fi

          stage=""
          for a in "$@"
          do
            if [ "${a}" == "--build-args" ]; then
              stage=build-args
            elif [ "${a}" == "--secrets" ]; then
              stage=secrets
            elif [ "${stage}" == "build-args" ]; then
              echo "--opt=\"build-arg:${a}\" \\" >> /tmp/run.sh
            elif [ "${stage}" == "secrets" ]; then
              # Split ID=FILE_CONTENT into variables id and data

              # using head because the data could be multiline
              id="$(echo "${a}" | head -1 | sed 's/=.*//')"

              # This is hacky, we remove the suffix ${id}= from all lines of the data.
              # If the data would be multiple lines and a line would start with ${id}=
              # then we would remove it. We could force users to give us the secret
              # base64 encoded. But ultimately, the best solution might be if the user
              # mounts the secret and just gives us the path here.
              data="$(echo "${a}" | sed "s/^${id}=//")"

              # Write the secret data into a temporary file, once we have volume support
              # in the build strategy, we should use a memory based emptyDir for this.
              echo -n "${data}" > "/tmp/secret_${id}"

              # Add the secret argument
              echo "--secret id=${id},src="/tmp/secret_${id}" \\" >> /tmp/run.sh
            fi
          done

          echo "--metadata-file /tmp/image-metadata.json" >> /tmp/run.sh

          chmod +x /tmp/run.sh
          /tmp/run.sh

          # Store the image digest
          sed -E 's/.*containerimage.digest":"([^"]*).*/\1/' < /tmp/image-metadata.json > '$(results.shp-image-digest.path)'          
        # That's the separator between the shell script and its args
        - --
        - --build-args
        - $(params.build-args[*])
        - --secrets
        - $(params.secrets[*])

See more information on how to use these parameters in a Build or BuildRun in the related documentation.

System parameters

In contrast to the strategy’s spec.parameters, system parameters and their values are defined at runtime. You can use them when defining the steps of a build strategy to access system information as well as information provided by the user in their Build or BuildRun. The following parameters are available:

Parameter | Description
--------- | -----------
$(params.shp-source-root) | The absolute path to the directory that contains the user’s sources.
$(params.shp-source-context) | The absolute path to the context directory of the user’s sources. If the user specified no value for spec.source.contextDir in their Build, then this value equals the value for $(params.shp-source-root). Note that this directory is not guaranteed to exist at the time the container for your step is started; you therefore cannot use this parameter as a step’s working directory.
$(params.shp-output-directory) | The absolute path to a directory in which the build strategy should store the image. You can store a single tarball containing a single image, or an OCI image layout.
$(params.shp-output-image) | The URL of the image that the user wants to push, as specified in the Build’s spec.output.image, or as an override from the BuildRun’s spec.output.image.
$(params.shp-output-insecure) | A flag that indicates that the output image’s registry location is insecure because it uses a certificate not signed by a certificate authority, or uses HTTP.

Output directory vs. output image

As a build strategy author, you decide whether your build strategy or Shipwright pushes the build image to the container registry:

  • If you DO NOT use $(params.shp-output-directory), then Shipwright assumes that your build strategy PUSHES the image. We call this a strategy-managed push.
  • If you DO use $(params.shp-output-directory), then Shipwright assumes that your build strategy does NOT PUSH the image. We call this a shipwright-managed push.

When you use the $(params.shp-output-directory) parameter, then Shipwright will also set the image-related system results.

If you are uncertain about how to implement your build strategy, then follow this guidance:

  1. If your build strategy tool cannot locally store an image but always pushes it, then you must do the push operation. An example is the Buildpacks strategy. You SHOULD respect the $(params.shp-output-insecure) parameter.
  2. If your build strategy tool can locally store an image, then the choice depends on how you expect your build users to make use of your strategy, and the nature of your strategy.
    1. Some build strategies do not produce all layers of an image, but use a common base image and put one or more layers on top with the application. An example is ko. Such base image layers are often already present in the destination registry (like in rebuilds). If the strategy can perform the push operation, then it can optimize the process and can omit the download of the base image when it is not required to push it. In the case of a shipwright-managed push, the complete image must be locally stored in $(params.shp-output-directory), which implies that a base image must always be downloaded.
    2. Some build strategy tools do not make it easy to determine the digest or size of the image, which can make it complex for you to set the strategy results. In the case of a shipwright-managed push, Shipwright has the responsibility to set them.
    3. Build users can configure the build to amend additional annotations, or labels to the final image. In the case of a shipwright-managed push, these can be set directly and the image will only be pushed once. In a strategy-managed push scenario, your build strategy will push the first version of the image without those annotations and labels. Shipwright will then mutate the image and push it again with the updated annotations and labels. Such a duplicate push can cause unexpected behavior with registries that trigger other actions when an image gets pushed, or that do not allow overwriting a tag.
    4. The Shipwright maintainers plan to provide more capabilities in the future that need the image locally, such as vulnerability scanning, or software bill of material (SBOM) creation. These capabilities may be only fully supported with shipwright-managed push.

System parameters vs Strategy Parameters Comparison

Parameter Type | User Configurable | Definition
-------------- | ----------------- | ----------
System Parameter | No | At run-time, by the BuildRun controller.
Strategy Parameter | Yes | At build-time, during the BuildStrategy creation.

Securely referencing string parameters

In build strategy steps, string parameters are referenced using $(params.PARAM_NAME). This applies to system parameters and to parameters defined in the build strategy. You can reference these parameters in many locations in the build steps, such as environment variable values, arguments, the image, and more. In the Pod, all $(params.PARAM_NAME) tokens are replaced using simple string replacement. This is safe in most locations, but requires your attention when you define an inline script using an argument. For example:

spec:
  parameters:
    - name: sample-parameter
      description: A sample parameter
      type: string
  steps:
    - name: sample-step
      command:
        - /bin/bash
      args:
        - -c
        - |
          set -euo pipefail

          some-tool --sample-argument "$(params.sample-parameter)"          

This opens the door to script injection, for example if the user sets the sample-parameter to argument-value" && malicious-command && echo ", the resulting pod argument will look like this:

        - |
          set -euo pipefail

          some-tool --sample-argument "argument-value" && malicious-command && echo ""          

To securely pass a parameter value into a script-style argument, you can choose between these two approaches:

  1. Using environment variables. This is used in some of our sample strategies, for example ko or buildpacks. Instead of directly using the parameter inside the script, you pass it via an environment variable. With proper quoting, the shell ensures that no command injection is possible:

    spec:
      parameters:
        - name: sample-parameter
          description: A sample parameter
          type: string
      steps:
        - name: sample-step
          env:
            - name: PARAM_SAMPLE_PARAMETER
              value: $(params.sample-parameter)
          command:
            - /bin/bash
          args:
            - -c
            - |
              set -euo pipefail
    
              some-tool --sample-argument "${PARAM_SAMPLE_PARAMETER}"          
    
  2. Using arguments. This is used in some of our sample build strategies, for example buildah. Here, you pass the parameter as an argument to your own inline script. Appropriate shell quoting guards against command injection.

    spec:
      parameters:
        - name: sample-parameter
          description: A sample parameter
          type: string
      steps:
        - name: sample-step
          command:
            - /bin/bash
          args:
            - -c
            - |
              set -euo pipefail
    
              SAMPLE_PARAMETER="$1"
    
              some-tool --sample-argument "${SAMPLE_PARAMETER}"          
            - --
            - $(params.sample-parameter)
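
A quick way to convince yourself of the difference is the following stand-alone bash sketch (not involving Shipwright, with a hypothetical payload):

```shell
# An untrusted parameter value containing an injection attempt.
payload='argument-value" && echo INJECTED && echo "'

# Unsafe: the value is spliced into the script text before bash parses it,
# so the embedded && commands actually execute.
unsafe=$(bash -c "echo --sample-argument \"${payload}\"")

# Safe: the value arrives as the positional argument $1 and is only ever
# expanded inside quotes, so it stays one literal string.
safe=$(bash -c 'echo --sample-argument "$1"' -- "${payload}")

echo "unsafe: ${unsafe}"
echo "safe: ${safe}"
```

In the unsafe variant, INJECTED appears on its own output line because the payload was executed; in the safe variant, the payload survives verbatim as argument text.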
    

System results

If you are using a strategy-managed push (see output directory vs output image), you can optionally store the size and digest of the image your build strategy created in a set of files.

Result file | Description
$(results.shp-image-digest.path) | File to store the digest of the image.
$(results.shp-image-size.path) | File to store the compressed size of the image.

You can look at sample build strategies, such as Buildpacks, to see how they fill some or all of the results files.

This information will be available in the .status.output section of the BuildRun.

apiVersion: shipwright.io/v1beta1
kind: BuildRun
# [...]
status:
 # [...]
  output:
    digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53
    size: 1989004
  # [...]
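
Inside a strategy step, filling these result files is plain file writing. Here is a stand-alone sketch; in a real strategy the paths are the literal $(results.…) tokens, while this snippet simulates them with a temp directory so it runs anywhere:

```shell
# Simulated result paths; a real step would use the
# $(results.shp-image-digest.path) and $(results.shp-image-size.path) tokens.
RESULTS_DIR="$(mktemp -d)"
IMAGE_DIGEST_RESULT="${RESULTS_DIR}/shp-image-digest"
IMAGE_SIZE_RESULT="${RESULTS_DIR}/shp-image-size"

# A real strategy would obtain digest and size from its build tool's output.
digest='sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53'
size='1989004'

printf '%s' "${digest}" > "${IMAGE_DIGEST_RESULT}"
printf '%s' "${size}" > "${IMAGE_SIZE_RESULT}"
```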

Additionally, you can store error details for debugging purposes when a BuildRun fails using your strategy.

Result file | Description
$(results.shp-error-reason.path) | File to store the error reason.
$(results.shp-error-message.path) | File to store the error message.

The reason is intended to be a one-word, CamelCase classification of the error source, with the first letter capitalized. Error details are only propagated if the build container terminates with a non-zero exit code. This information will be available in the .status.failureDetails section of the BuildRun.

apiVersion: shipwright.io/v1beta1
kind: BuildRun
# [...]
status:
  # [...]
  failureDetails:
    location:
      container: step-source-default
      pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr
    message: The source repository does not exist, or you have insufficient permission
      to access it.
    reason: GitRemotePrivate
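
The error result files are written the same way, just before the step exits with a non-zero code. A stand-alone sketch follows; the paths are simulated, and shp_fetch_source is a hypothetical stand-in for the failing tool invocation:

```shell
RESULTS_DIR="$(mktemp -d)"
ERROR_REASON_RESULT="${RESULTS_DIR}/shp-error-reason"   # stands in for $(results.shp-error-reason.path)
ERROR_MESSAGE_RESULT="${RESULTS_DIR}/shp-error-message" # stands in for $(results.shp-error-message.path)

# Hypothetical stand-in for the real build tool call that fails.
shp_fetch_source() { return 1; }

if ! shp_fetch_source; then
  # One-word CamelCase reason plus a human-readable message.
  printf '%s' 'GitRemotePrivate' > "${ERROR_REASON_RESULT}"
  printf '%s' 'The source repository does not exist, or you have insufficient permission to access it.' > "${ERROR_MESSAGE_RESULT}"
fi
```

A real step would then exit non-zero so that the details get propagated; the exit is omitted here so the sketch runs to completion.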

Security Contexts

In a build strategy, it is recommended that you define a securityContext with a runAsUser and runAsGroup:

spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000

This runAs configuration is used for all shipwright-managed steps, such as the step that retrieves the source code, and for the steps you define in the build strategy. Sharing the same runAs configuration across all steps eliminates file permission problems.

Without a securityContext for the build strategy, shipwright-managed steps run with the runAsUser and runAsGroup defined in the configuration's container templates, which is potentially a different user than the one your build strategy uses. This can cause issues when, for example, source code is downloaded as user A (as defined by the Git container template), but your strategy accesses it as user B.

In build strategy steps you can define a step-specific securityContext that matches Kubernetes’ security context where you can configure other security aspects such as capabilities or privileged containers.

Steps Resource Definition

All strategy steps can include a definition of resources (limits and requests) for CPU, memory, and disk. For strategies with more than one step, each step (container) may require more resources than others. Strategy admins are free to define the values that they consider the best fit for each step. Also, identical strategies with the same steps that differ only in their name and step resources can be installed on the cluster to allow users to create builds with smaller and larger resource requirements.

Strategies with different resources

If strategy admins need multiple flavors of the same strategy, where one flavor has more resources than another, then multiple strategies for the same type should be defined on the cluster. In the following example, we use Kaniko as the type:

---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: kaniko-small
spec:
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.23.2
      workingDir: $(params.shp-source-root)
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - SETGID
            - SETUID
            - SETFCAP
            - KILL
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
        - name: AWS_ACCESS_KEY_ID
          value: NOT_SET
        - name: AWS_SECRET_KEY
          value: NOT_SET
      command:
        - /kaniko/executor
      args:
        - --skip-tls-verify=true
        - --dockerfile=$(params.dockerfile)
        - --context=$(params.shp-source-context)
        - --destination=$(params.shp-output-image)
        - --snapshot-mode=redo
        - --push-retry=3
      resources:
        limits:
          cpu: 250m
          memory: 65Mi
        requests:
          cpu: 250m
          memory: 65Mi
  parameters:
  - name: dockerfile
    description: The path to the Dockerfile to be used for building the image.
    type: string
    default: "Dockerfile"
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: kaniko-medium
spec:
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.23.2
      workingDir: $(params.shp-source-root)
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - SETGID
            - SETUID
            - SETFCAP
            - KILL
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
        - name: AWS_ACCESS_KEY_ID
          value: NOT_SET
        - name: AWS_SECRET_KEY
          value: NOT_SET
      command:
        - /kaniko/executor
      args:
        - --skip-tls-verify=true
        - --dockerfile=$(params.dockerfile)
        - --context=$(params.shp-source-context)
        - --destination=$(params.shp-output-image)
        - --snapshot-mode=redo
        - --push-retry=3
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 500m
          memory: 1Gi
  parameters:
  - name: dockerfile
    description: The path to the Dockerfile to be used for building the image.
    type: string
    default: "Dockerfile"

The above provides more control and flexibility for strategy admins. End users only need to reference the proper strategy. For example:

---
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: kaniko-medium
spec:
  source:
    git:  
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    name: kaniko-medium
    kind: ClusterBuildStrategy
  paramValues:
  - name: dockerfile
    value: Dockerfile

How does Tekton Pipelines handle resources

The Build controller relies on the Tekton pipeline controller to schedule the pods that execute the above strategy steps. In a nutshell, the Build controller creates a Tekton TaskRun at runtime, and the TaskRun generates a new pod in the particular namespace. In order to build an image, the pod executes all the strategy steps one by one.

Tekton manages each step's resource requests in a very particular way; see the docs, which mention the following:

The CPU, memory, and ephemeral storage resource requests will be set to zero, or, if specified, the minimums set through LimitRanges in that Namespace, if the container image does not have the largest resource request out of all container images in the Task. This ensures that the Pod that executes the Task only requests enough resources to run a single container image in the Task rather than hoard resources for all container images in the Task at once.
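
The quoted behavior can be modeled in a few lines. This is a simplification for illustration only, not Tekton's actual implementation, shown here for per-step memory requests in a namespace without a LimitRange:

```shell
# Memory requests (in Mi) of two strategy steps, e.g. buildah-bud and buildah-push.
requests="65 100"

# Find the largest request across all steps.
max=0
for r in ${requests}; do
  if [ "${r}" -gt "${max}" ]; then max=${r}; fi
done

# Only the step with the largest request keeps it; all others drop to zero.
effective=""
for r in ${requests}; do
  if [ "${r}" -eq "${max}" ]; then
    effective="${effective} ${r}Mi"
    max=-1   # keep the value on one container only
  else
    effective="${effective} 0"
  fi
done

echo "effective requests:${effective}"
```

For these inputs, the first step's request drops to 0 and only the 100Mi request survives, mirroring the Pod outputs shown in the scenarios below.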

Examples of Tekton resources management

For a more concrete example, let's take a look at the following scenarios:


Scenario 1. Namespace without LimitRange, both steps with the same resource values.

If we apply the following resources:

We will see some differences between the TaskRun definition and the pod definition.

For the TaskRun, as expected, we can see the resources on each step, as we previously defined in our strategy.

$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "250m",
    "memory": "65Mi"
  }
}

$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "250m",
    "memory": "65Mi"
  }
}

The pod definition is different: Tekton only keeps the highest request values on one container and sets the rest (the lowest) to zero:

$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "250m",
    "ephemeral-storage": "0",
    "memory": "65Mi"
  }
}

$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "0",               <------------------- See how the request is set to ZERO.
    "ephemeral-storage": "0", <------------------- See how the request is set to ZERO.
    "memory": "0"             <------------------- See how the request is set to ZERO.
  }
}

In this scenario, only one container can have the spec.resources.requests definition. Even when both steps have the same values, only one container gets them; the others are set to zero.


Scenario 2. Namespace without LimitRange, steps with different resources:

If we apply the following resources:

  • buildahBuild

  • buildahBuildRun

  • We will use a modified buildah strategy, with the following step resources:

      - name: buildah-bud
        image: quay.io/containers/buildah:v1.37.5
        workingDir: $(params.shp-source-root)
        securityContext:
          privileged: true
        command:
          - /usr/bin/buildah
        args:
          - bud
          - --tag=$(params.shp-output-image)
          - --file=$(params.dockerfile)
          - $(build.source.contextDir)
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 65Mi
        volumeMounts:
          - name: buildah-images
            mountPath: /var/lib/containers/storage
      - name: buildah-push
        image: quay.io/containers/buildah:v1.37.5
        securityContext:
          privileged: true
        command:
          - /usr/bin/buildah
        args:
          - push
          - --tls-verify=false
          - docker://$(params.shp-output-image)
        resources:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 100Mi  <------ See how we provide more memory to step-buildah-push, compared to the 65Mi of the other step
    

For the TaskRun, as expected we can see the resources on each step.

$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "250m",
    "memory": "65Mi"
  }
}

$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "250m",
    "memory": "100Mi"
  }
}

The pod definition is different: Tekton only keeps the highest request values on one container and sets the rest (the lowest) to zero:

$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "250m",                <------------------- See how the CPU is preserved
    "ephemeral-storage": "0",
    "memory": "0"                 <------------------- See how the memory is set to ZERO
  }
}
$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
{
  "limits": {
    "cpu": "500m",
    "memory": "1Gi"
  },
  "requests": {
    "cpu": "0",                     <------------------- See how the CPU is set to zero.
    "ephemeral-storage": "0",
    "memory": "100Mi"               <------------------- See how the memory is preserved on this container
  }
}

In the above scenario, we can see how the maximum resource requests are distributed between containers. The container step-buildah-push gets the 100Mi for its memory request, since it defined the highest value. At the same time, the container step-buildah-bud is assigned 0 for its memory request.


Scenario 3. Namespace with a LimitRange.

When a LimitRange exists in the namespace, the Tekton Pipeline controller takes the same approach as in the above two scenarios. The difference is that the containers with lower values get the minimum values of the LimitRange instead of zero.
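
For illustration, a minimal LimitRange of the kind that would take effect here (name and values are hypothetical):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-minimums
  namespace: test-build
spec:
  limits:
    - type: Container
      min:
        cpu: 100m
        memory: 64Mi
```

With this in place, the requests that were zeroed in the earlier scenarios would instead be set to 100m CPU and 64Mi memory.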

Annotations

Annotations can be defined for a BuildStrategy/ClusterBuildStrategy as for any other Kubernetes object. Annotations are propagated to the TaskRun, and from there Tekton propagates them to the Pod. Example use cases:

  • The Kubernetes Network Traffic Shaping feature looks for the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to limit the network bandwidth the Pod is allowed to use.
  • The AppArmor profile of a container is defined using the container.apparmor.security.beta.kubernetes.io/<container_name> annotation.

The following annotations are not propagated:

  • kubectl.kubernetes.io/last-applied-configuration
  • clusterbuildstrategy.shipwright.io/*
  • buildstrategy.shipwright.io/*
  • build.shipwright.io/*
  • buildrun.shipwright.io/*

A Kubernetes administrator can further restrict the usage of annotations by using policy engines like Open Policy Agent.

Volumes and VolumeMounts

Build Strategies can declare volumes. These volumes can be referenced by the build steps using a volumeMount. Volumes in a Build Strategy follow the declaration of Pod volumes, so all the usual volumeSource types are supported.

Volumes can be overridden by Builds and BuildRuns, so Build Strategy volumes support an overridable flag, a boolean that is false by default. If a volume is not overridable, a Build or BuildRun that tries to override it will fail.

Build steps can declare a volumeMount, which allows them to access volumes defined by BuildStrategy, Build or BuildRun.

Here is an example of a BuildStrategy object that defines volumes and volumeMounts:

apiVersion: shipwright.io/v1beta1
kind: BuildStrategy
metadata:
  name: buildah
spec:
  steps:
    - name: build
      image: quay.io/containers/buildah:v1.27.0
      workingDir: $(params.shp-source-root)
      command:
        - buildah
        - bud
        - --tls-verify=false
        - --layers
        - -f
        - $(params.dockerfile)
        - -t
        - $(params.shp-output-image)
        - $(params.shp-source-context)
      volumeMounts:
        - name: varlibcontainers
          mountPath: /var/lib/containers
  volumes:
    - name: varlibcontainers
      overridable: true
      emptyDir: {}
  # ...

2.2 - Build

Overview

A Build resource allows the user to define:

  • source
  • trigger
  • strategy
  • paramValues
  • output
  • timeout
  • env
  • retention
  • volumes
  • nodeSelector

A Build is available within a namespace.

Build Controller

The controller watches for:

  • Updates on the Build resource (CRD instance)

When the controller reconciles it:

  • Validates if the referenced Strategy exists.
  • Validates if the specified paramValues exist on the referenced strategy parameters. It also validates if the paramValues names collide with the Shipwright reserved names.
  • Validates if the container registry output secret exists.
  • Validates if the referenced spec.source.git.url endpoint exists.

Build Validations

Note: reported validations in build status are deprecated, and will be removed in a future release.

To prevent users from triggering BuildRuns (execution of a Build) that will eventually fail because of wrong or missing dependencies or configuration settings, the Build controller will validate them in advance. If all validations are successful, users can expect a Succeeded status.reason. However, if any validations fail, users can rely on the status.reason and status.message fields to understand the root cause.

Status.Reason | Description
BuildStrategyNotFound | The referenced namespace-scope strategy doesn't exist.
ClusterBuildStrategyNotFound | The referenced cluster-scope strategy doesn't exist.
SetOwnerReferenceFailed | Setting ownerreferences between a Build and a BuildRun failed. This status is triggered when you set the spec.retention.atBuildDeletion to true in a Build.
SpecSourceSecretRefNotFound | The secret used to authenticate to git doesn't exist.
SpecOutputSecretRefNotFound | The secret used to authenticate to the container registry doesn't exist.
SpecBuilderSecretRefNotFound | The secret used to authenticate the container registry doesn't exist.
MultipleSecretRefNotFound | More than one secret is missing. At the moment, only three paths on a Build can specify a secret.
RestrictedParametersInUse | One or many defined paramValues are colliding with Shipwright reserved parameters. See Defining Params for more information.
UndefinedParameter | One or many defined paramValues are not defined in the referenced strategy. Please ensure that the strategy defines them under its spec.parameters list.
RemoteRepositoryUnreachable | The defined spec.source.git.url was not found. This validation only takes place for HTTP/HTTPS protocols.
BuildNameInvalid | The defined Build name (metadata.name) is invalid. The Build name should be a valid label value.
SpecEnvNameCanNotBeBlank | The name for a user-provided environment variable is blank.
SpecEnvValueCanNotBeBlank | The value for a user-provided environment variable is blank.
SpecEnvOnlyOneOfValueOrValueFromMustBeSpecified | Both value and valueFrom were specified, which are mutually exclusive.
RuntimePathsCanNotBeEmpty | The spec.runtime feature is used but the paths were not specified.
WrongParameterValueType | A single value was provided for an array parameter, or vice-versa.
InconsistentParameterValues | Parameter values have more than one of configMapValue, secretValue, or value set.
EmptyArrayItemParameterValues | Array parameters contain an item where none of configMapValue, secretValue, or value is set.
IncompleteConfigMapValueParameterValues | A configMapValue is specified where the name or the key is empty.
IncompleteSecretValueParameterValues | A secretValue is specified where the name or the key is empty.
VolumeDoesNotExist | Volume referenced by the Build does not exist, therefore the Build cannot be run.
VolumeNotOverridable | Volume defined by the Build is not set as overridable in the strategy.
UndefinedVolume | Volume defined by the Build is not found in the strategy.
TriggerNameCanNotBeBlank | Trigger condition does not have a name.
TriggerInvalidType | Trigger type is invalid.
TriggerInvalidGitHubWebHook | Trigger type GitHub is invalid.
TriggerInvalidImage | Trigger type Image is invalid.
TriggerInvalidPipeline | Trigger type Pipeline is invalid.
OutputTimestampNotSupported | An unsupported output timestamp setting was used.
OutputTimestampNotValid | The output timestamp value is not valid.

Configuring a Build

The Build definition supports the following fields:

  • Required:

    • apiVersion - Specifies the API version, for example shipwright.io/v1beta1.
    • kind - Specifies the Kind type, for example Build.
    • metadata - Metadata that identify the custom resource instance, especially the name of the Build, and in which namespace you place it. Note: You should use your own namespace, and not put your builds into the shipwright-build namespace where Shipwright’s system components run.
    • spec.source - Refers to the location of the source code, for example a Git repository or OCI artifact image.
    • spec.strategy - Refers to the BuildStrategy to be used, see the examples
    • spec.output - Refers to the location where the generated image would be pushed.
    • spec.output.pushSecret - References an existing secret to get access to the container registry.
  • Optional:

    • spec.paramValues - Refers to a name-value(s) list to specify values for parameters defined in the BuildStrategy.
    • spec.timeout - Defines a custom timeout. The value needs to be parsable by ParseDuration, for example, 5m. The default is ten minutes. You can overwrite the value in the BuildRun.
    • spec.output.annotations - Refers to a list of key/value pairs that can be used to annotate the output image.
    • spec.output.labels - Refers to a list of key/value pairs that can be used to label the output image.
    • spec.output.timestamp - Instructs the build to change the output image creation timestamp to the specified value. When omitted, the respective build strategy tool defines the output image timestamp.
      • Use string Zero to set the image timestamp to UNIX epoch timestamp zero.
      • Use string SourceTimestamp to set the image timestamp to the source timestamp, i.e. the timestamp of the Git commit that was used.
      • Use string BuildTimestamp to set the image timestamp to the timestamp of the build run.
      • Use any valid UNIX epoch seconds number as a string to set this as the image timestamp.
    • spec.output.vulnerabilityScan - Enables a security vulnerability scan for your generated image. Further options for vulnerability scanning are defined here
    • spec.env - Specifies additional environment variables that should be passed to the build container. The available variables depend on the tool that is being used by the chosen build strategy.
    • spec.retention.atBuildDeletion - Defines whether all related BuildRuns need to be deleted when deleting the Build. The default is false.
    • spec.retention.ttlAfterFailed - Specifies the duration for which a failed BuildRun can exist.
    • spec.retention.ttlAfterSucceeded - Specifies the duration for which a successful BuildRun can exist.
    • spec.retention.failedLimit - Specifies the number of failed BuildRuns that can exist.
    • spec.retention.succeededLimit - Specifies the number of successful BuildRuns that can exist.
    • spec.nodeSelector - Specifies a selector which must match a node’s labels for the build pod to be scheduled on that node.
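
As an illustration, here is a hedged sketch combining several of the optional fields above (the Build name and all values are illustrative only):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-with-retention
spec:
  timeout: 15m
  retention:
    atBuildDeletion: true
    ttlAfterFailed: 30m
    ttlAfterSucceeded: 1h
    failedLimit: 10
    succeededLimit: 20
  # source, strategy, and output omitted for brevity
```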

Defining the Source

A Build resource can specify a source type, such as a Git repository or an OCI artifact, together with other parameters like:

  • source.type - Specify the type of the data-source. Currently, the supported types are “Git”, “OCIArtifact”, and “Local”.
  • source.git.url - Specify the source location using a Git repository.
  • source.git.cloneSecret - For private repositories or registries, the name references a secret in the namespace that contains the SSH private key or Docker access credentials, respectively.
  • source.git.revision - A specific revision to select from the source repository; this can be a commit, tag, or branch name. If not defined, it will fall back to the Git repository default branch.
  • source.contextDir - For repositories where the source code is not located at the root folder, you can specify this path here.

By default, the Build controller does not validate that the Git repository exists. If the validation is desired, users can explicitly define the build.shipwright.io/verify.repository annotation with true. For example:

Example of a Build with the build.shipwright.io/verify.repository annotation to enable the spec.source.git.url validation.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
  annotations:
    build.shipwright.io/verify.repository: "true"
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build

Note: The Build controller only validates two scenarios. The first one is when the endpoint uses an http/https protocol. The second one is when an ssh protocol such as git@ has been defined but a referenced secret, such as source.git.cloneSecret, has not been provided.
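
For the ssh case, a hedged sketch (the repository URL and secret name are illustrative):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: git@github.com:shipwright-io/sample-go.git
      cloneSecret: my-ssh-credentials
```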

Example of a Build with a source with credentials defined by the user.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/sclorg/nodejs-ex
      cloneSecret: source-repository-credentials

Example of a Build with a source that specifies a specific subfolder on the repository.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-custom-context-dockerfile
spec:
  source:
    type: Git
    git:
      url: https://github.com/SaschaSchwarze0/npm-simple
    contextDir: renamed

Example of a Build that specifies the tag v0.1.0 for the git repository:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
      revision: v0.1.0
    contextDir: docker-build

Example of a Build that specifies environment variables:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  env:
    - name: EXAMPLE_VAR_1
      value: "example-value-1"
    - name: EXAMPLE_VAR_2
      value: "example-value-2"

Example of a Build that uses the Kubernetes Downward API to expose a Pod field as an environment variable:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name

Example of a Build that uses the Kubernetes Downward API to expose a Container field as an environment variable:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  env:
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: my-container
          resource: limits.memory

Defining the Strategy

A Build resource can specify the BuildStrategy to use, which is either a ClusterBuildStrategy or a namespaced BuildStrategy.

Defining the strategy is straightforward. You define the name and the kind. For example:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-nodejs-build
spec:
  strategy:
    name: buildpacks-v3
    kind: ClusterBuildStrategy

Defining ParamValues

A Build resource can specify paramValues for parameters that are defined in the referenced BuildStrategy. You specify these parameter values to control how the steps of the build strategy behave. You can overwrite values in the BuildRun resource. See the related documentation for more information.

The build strategy author can define a parameter as either a simple string or an array. Depending on that, you must specify the value accordingly. The build strategy parameter can be specified with a default value. You must specify a value in the Build or BuildRun for parameters without a default.

You can either specify values directly or reference keys from ConfigMaps and Secrets. Note: the usage of ConfigMaps and Secrets is limited by the usage of the parameter in the build strategy steps. You can only use them if the parameter is used in the command, arguments, or environment variable values.

When using paramValues, users should avoid:

  • Defining a spec.paramValues name that doesn’t match one of the spec.parameters defined in the BuildStrategy.
  • Defining a spec.paramValues name that collides with the Shipwright reserved parameters. These are BUILDER_IMAGE, DOCKERFILE, CONTEXT_DIR, and any name starting with shp-.

In general, paramValues are tightly bound to Strategy parameters. Please make sure you understand the contents of your strategy of choice before defining paramValues in the Build.

Example

The BuildKit sample BuildStrategy contains various parameters. Two of them are outlined here:

apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildkit
  ...
spec:
  parameters:
  - name: build-args
    description: "The ARG values in the Dockerfile. Values must be in the format KEY=VALUE."
    type: array
    defaults: []
  - name: cache
    description: "Configure BuildKit's cache usage. Allowed values are 'disabled' and 'registry'. The default is 'registry'."
    type: string
    default: registry
  ...
  steps:
  ...

The cache parameter is a simple string. You can provide it like this in your Build:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
  - name: cache
    value: disabled
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
  ...
  output:
  ...

If you have multiple Builds and want to control this parameter centrally, then you can create a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: buildkit-configuration
  namespace: a-namespace
data:
  cache: disabled

You reference the ConfigMap as a parameter value like this:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
  - name: cache
    configMapValue:
      name: buildkit-configuration
      key: cache
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
  ...
  output:
  ...

The build-args parameter is defined as an array. In the BuildKit strategy, you use build-args to set the ARG values in the Dockerfile, specified as key-value pairs separated by an equals sign, for example, NODE_VERSION=16. Your Build then looks like this (the value for cache is retained to outline how multiple paramValues can be set):

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
  - name: cache
    configMapValue:
      name: buildkit-configuration
      key: cache
  - name: build-args
    values:
    - value: NODE_VERSION=16
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
  ...
  output:
  ...

Like simple values, you can also reference ConfigMaps and Secrets for every item in the array. Example:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
  - name: cache
    configMapValue:
      name: buildkit-configuration
      key: cache
  - name: build-args
    values:
    - configMapValue:
        name: project-configuration
        key: node-version
        format: NODE_VERSION=${CONFIGMAP_VALUE}
    - value: DEBUG_MODE=true
    - secretValue:
        name: npm-registry-access
        key: npm-auth-token
        format: NPM_AUTH_TOKEN=${SECRET_VALUE}
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
  ...
  output:
  ...

Here, we pass three items in the build-args array:

  1. The first item references a ConfigMap. Because the ConfigMap just contains the value (for example "16") as the data of the node-version key, the format setting is used to prepend NODE_VERSION= to make it a complete key-value pair.
  2. The second item is just a hard-coded value.
  3. The third item references a Secret, the same as with ConfigMaps.

Note: The logging output of BuildKit contains expanded ARGs in RUN commands. Also, such information ends up in the final container image if you use such args in the final stage of your Dockerfile. An alternative approach to pass secrets is using secret mounts. The BuildKit sample strategy supports them using the secrets parameter.
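As a rough sketch, a secret mount could be requested through that parameter like this. The parameter name secrets comes from the BuildKit sample strategy mentioned above, but the exact item format is strategy-specific and the value shown here is an assumption; check the parameter description in your strategy definition:

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
  - name: secrets
    values:
    # hypothetical item: the name of a Secret in the namespace to mount;
    # the required format depends on the strategy definition
    - value: npm-registry-access
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
  ...
  output:
  ...
```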

Defining the Builder or Dockerfile

In the Build resource, you use the parameters (spec.paramValues) to specify the image that contains the tools to build the final image, or to select the Dockerfile to build from. For example, the following Build definition specifies the Dockerfile to use.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  paramValues:
  - name: dockerfile
    value: Dockerfile

Another example is when the user chooses the builder image for a specific language as part of the source-to-image buildStrategy:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
  - name: builder-image
    value: "docker.io/centos/nodejs-10-centos7"

Defining the Output

A Build resource can specify the output where it should push the image. For external private registries, it is recommended to specify a secret with the related data to access it. Options are available to specify annotations and labels for the output image. The annotations and labels mentioned here are specific to the container image and do not relate to the Build annotations. Analogously, the timestamp refers to the timestamp of the output image.

Note: When you specify annotations, labels, or timestamp, the output image may get pushed twice, depending on the respective strategy. For example, strategies that push the image to the registry as part of their build step will lead to an additional push of the image in case image processing like labels is configured. If you have automation based on push events in your container registry, be aware of this behavior.

For example, the user specifies a public registry:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
  - name: builder-image
    value: "docker.io/centos/nodejs-10-centos7"
  output:
    image: image-registry.openshift-image-registry.svc:5000/build-examples/nodejs-ex

Another example is when the user specifies a private registry:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
  - name: builder-image
    value: "docker.io/centos/nodejs-10-centos7"
  output:
    image: us.icr.io/source-to-image-build/nodejs-ex
    pushSecret: icr-knbuild

Example of a user specifying image annotations and labels:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
  - name: builder-image
    value: "docker.io/centos/nodejs-10-centos7"
  output:
    image: us.icr.io/source-to-image-build/nodejs-ex
    pushSecret: icr-knbuild
    annotations:
      "org.opencontainers.image.source": "https://github.com/org/repo"
      "org.opencontainers.image.url": "https://my-company.com/images"
    labels:
      "maintainer": "team@my-company.com"
      "description": "This is my cool image"

Example of the image timestamp set to SourceTimestamp, which sets the output image timestamp to match the timestamp of the Git commit used for the build:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-go-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: source-build
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  output:
    image: some.registry.com/namespace/image:tag
    pushSecret: credentials
    timestamp: SourceTimestamp

Defining the vulnerabilityScan

vulnerabilityScan provides configurations to run a scan for your generated image.

  • vulnerabilityScan.enabled - Specifies whether to run a vulnerability scan for the image. The supported values are true and false.
  • vulnerabilityScan.failOnFinding - Indicates whether to fail the BuildRun if the vulnerability scan finds vulnerabilities. The supported values are true and false. This field is optional and false by default.
  • vulnerabilityScan.ignore.issues - References the security issues to be ignored in the vulnerability scan.
  • vulnerabilityScan.ignore.severity - Denotes the severity levels of security issues to be ignored. Valid values are:
    • low: excludes low severity vulnerabilities, displaying only medium, high and critical vulnerabilities
    • medium: excludes low and medium severity vulnerabilities, displaying only high and critical vulnerabilities
    • high: excludes low, medium and high severity vulnerabilities, displaying only critical vulnerabilities
  • vulnerabilityScan.ignore.unfixed - Indicates whether to ignore vulnerabilities for which no fix exists. The supported values are true and false.

Example of a user specifying vulnerability scanning options:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-go-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: source-build
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  output:
    image: some.registry.com/namespace/image:tag
    pushSecret: credentials
    vulnerabilityScan:
      enabled: true
      failOnFinding: true
      ignore:
        issues:
          - CVE-2022-12345
        severity: Low
        unfixed: true

Annotations added to the output image can be verified by running the command:

docker manifest inspect us.icr.io/source-to-image-build/nodejs-ex | jq ".annotations"

You can verify which labels were added to the output image that is available on the host machine by running the command:

docker inspect us.icr.io/source-to-image-build/nodejs-ex | jq ".[].Config.Labels"

Defining Retention Parameters

A Build resource can specify how long a completed BuildRun can exist and how many failed or succeeded BuildRuns can exist. Instead of manually cleaning up old BuildRuns, retention parameters provide an alternate method for cleaning up BuildRuns automatically.

As part of the retention parameters, we have the following fields:

  • retention.atBuildDeletion - Defines whether all related BuildRuns need to be deleted when the Build is deleted. The default is false.
  • retention.succeededLimit - Defines the number of succeeded BuildRuns for a Build that can exist.
  • retention.failedLimit - Defines the number of failed BuildRuns for a Build that can exist.
  • retention.ttlAfterFailed - Specifies the duration for which a failed BuildRun can exist.
  • retention.ttlAfterSucceeded - Specifies the duration for which a successful BuildRun can exist.

An example of a Build using both TTL and limit retention fields. With such a configuration, a BuildRun is deleted as soon as the first criterion is met:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-retention-ttl
spec:
  source:
    type: Git
    git:
      url: "https://github.com/shipwright-io/sample-go"
    contextDir: docker-build
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  output:
  ...
  retention:
    ttlAfterFailed: 30m
    ttlAfterSucceeded: 1h
    failedLimit: 10
    succeededLimit: 20

Note: When changes are made to retention.failedLimit and retention.succeededLimit values, they come into effect as soon as the build is applied, thereby enforcing the new limits. On the other hand, changing the retention.ttlAfterFailed and retention.ttlAfterSucceeded values will only affect new buildruns. Old buildruns will adhere to the old TTL retention values. In case TTL values are defined in buildrun specifications as well as build specifications, priority will be given to the values defined in the buildrun specifications.

Defining Volumes

Builds can declare volumes. Volumes declared in a Build must override volumes defined in the corresponding BuildStrategy. If a volume is not overridable, the BuildRun will eventually fail.

Volumes follow the declaration of Pod Volumes, so all the usual volumeSource types are supported.

Here is an example of Build object that overrides volumes:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-name
spec:
  source:
    type: Git
    git:
      url: https://github.com/example/url
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  paramValues:
  - name: dockerfile
    value: Dockerfile
  output:
    image: registry/namespace/image:latest
  volumes:
    - name: volume-name
      configMap:
        name: test-config

Defining Triggers

Using triggers, you can submit BuildRun instances when certain events happen. The idea is to trigger Shipwright builds in an event-driven fashion; for that purpose, you can watch certain types of events.

Note: triggers rely on the Shipwright Triggers project to be deployed and configured in the same Kubernetes cluster where you run Shipwright Build. If it is not set up, the triggers defined in a Build are ignored.

The types of events under watch are defined in the .spec.trigger attribute. Consider the following example:

apiVersion: shipwright.io/v1beta1
kind: Build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
      cloneSecret: webhook-secret
    contextDir: docker-build
  trigger:
    when: []

Certain types of events will use attributes defined on .spec.source to complete the information needed in order to dispatch events.

GitHub

The GitHub type reacts to events coming from the GitHub WebHook interface. The events are compared against the existing Build resources, so a Build object can be identified based on .spec.source.git.url combined with the attributes in .spec.trigger.when[].github.

To identify a given Build object, the first criterion is the repository URL; then the branch name listed in the GitHub event payload must also match. The branch matching works as follows:

  • First, the branch name is checked against the .spec.trigger.when[].github.branches entries
  • If the .spec.trigger.when[].github.branches is empty, the branch name is compared against .spec.source.git.revision
  • If spec.source.git.revision is empty, the default revision name is used (“main”)

The following snippet shows a configuration matching Push and PullRequest events on the main branch, for example:

# [...]
spec:
  source:
    git:
      url: https://github.com/shipwright-io/sample-go
  trigger:
    when:
      - name: push and pull-request on the main branch
        type: GitHub
        github:
          events:
            - Push
            - PullRequest
          branches:
            - main

Image

In combination with the Image controller, you can watch container images and trigger new builds when those images change.

For instance, imagine that the image named ghcr.io/some/base-image is used as input for the Build process, and every time it changes we would like to trigger a new build. Consider the following snippet:

# [...]
spec:
  trigger:
    when:
      - name: watching for the base-image changes
        type: Image
        image:
          names:
            - ghcr.io/some/base-image:latest

Tekton Pipeline

Shipwright can also be used in combination with Tekton Pipeline. You can configure the Build to watch for Pipeline resources in Kubernetes, reacting when the object reaches the desired status (.objectRef.status). The Pipeline is identified either by its name (.objectRef.name) or by a label selector (.objectRef.selector). The example below uses the label selector approach:

# [...]
spec:
  trigger:
    when:
      - name: watching over for the Tekton Pipeline
        type: Pipeline
        objectRef:
          status:
            - Succeeded
          selector:
            label: value

While the next snippet uses the object name for identification:

# [...]
spec:
  trigger:
    when:
      - name: watching over for the Tekton Pipeline
        type: Pipeline
        objectRef:
          status:
            - Succeeded
          name: tekton-pipeline-name

BuildRun Deletion

A Build can automatically delete its related BuildRuns. To enable this feature, set spec.retention.atBuildDeletion to true in the Build instance. The default value is false. See an example of how to define this field:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: kaniko-golang-build
spec:
  retention:
    atBuildDeletion: true
  # [...]

2.3 - BuildRun

Overview

The resource BuildRun (buildruns.shipwright.io/v1beta1) is the build process of a Build resource definition executed in Kubernetes.

A BuildRun resource allows the user to define:

  • The BuildRun name, through which the user can monitor the status of the image construction.
  • A referenced Build instance to use during the build construction.
  • A service account for hosting all related secrets to build the image.

A BuildRun is available within a namespace.

BuildRun Controller

The controller watches for:

  • Updates on a Build resource (CRD instance)
  • Updates on a TaskRun resource (CRD instance)

When the controller reconciles it:

  • Looks for any existing owned TaskRuns and updates the parent BuildRun status.
  • Retrieves the specified service account and sets it, together with the output secret specified on the Build resource.
  • If a TaskRun does not exist, it generates a new Tekton TaskRun and sets a reference to this resource (as a child of the controller).
  • On any subsequent updates of the TaskRun, the controller updates the parent BuildRun resource instance.

Configuring a BuildRun

The BuildRun definition supports the following fields:

  • Required:

    • apiVersion - Specifies the API version, for example shipwright.io/v1beta1.
    • kind - Specifies the Kind type, for example BuildRun.
    • metadata - Metadata that identifies the CRD instance, for example the name of the BuildRun.
  • Optional:

    • spec.build.name - Specifies an existing Build resource instance to use.
    • spec.build.spec - Specifies an embedded (transient) Build resource to use.
    • spec.serviceAccount - Refers to the SA to use when building the image. (defaults to the default SA)
    • spec.timeout - Defines a custom timeout. The value needs to be parsable by ParseDuration, for example, 5m. The value overwrites the value that is defined in the Build.
    • spec.paramValues - Refers to a name-value(s) list to specify values for parameters defined in the BuildStrategy. This value overwrites values defined with the same name in the Build.
    • spec.output.image - Refers to a custom location where the generated image will be pushed. The value overwrites the output.image value defined in the Build. (Note: other properties of the output, for example the credentials, cannot be specified in the BuildRun spec.)
    • spec.output.pushSecret - Reference an existing secret to get access to the container registry. This secret will be added to the service account along with the ones requested by the Build.
    • spec.output.timestamp - Overrides the output timestamp configuration of the referenced build to instruct the build to change the output image creation timestamp to the specified value. When omitted, the respective build strategy tool defines the output image timestamp.
    • spec.output.vulnerabilityScan - Overrides the output vulnerabilityScan configuration of the referenced build to run the vulnerability scan for the generated image.
    • spec.env - Specifies additional environment variables that should be passed to the build container. Overrides any environment variables that are specified in the Build resource. The available variables depend on the tool used by the chosen build strategy.
    • spec.nodeSelector - Specifies a selector which must match a node’s labels for the build pod to be scheduled on that node.

Note: The spec.build.name and spec.build.spec are mutually exclusive. Furthermore, the overrides for timeout, paramValues, output, and env can only be combined with spec.build.name, but not with spec.build.spec.

Defining the Build Reference

A BuildRun resource can reference a Build resource, that indicates what image to build. For example:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildpack-nodejs-buildrun-namespaced
spec:
  build:
    name: buildpack-nodejs-build-namespaced

Defining the Build Specification

A complete BuildSpec can be embedded into the BuildRun for the build.

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: standalone-buildrun
spec:
  build:
    spec:
      source:
        type: Git
        git:
          url: https://github.com/shipwright-io/sample-go.git
        contextDir: source-build
      strategy:
        kind: ClusterBuildStrategy
        name: buildpacks-v3
      output:
        image: foo/bar:latest

Defining the Build Source

BuildRuns support the specification of a Local type source. This is useful when working in development mode, without forcing a user to commit and push changes to their version control system. For more information, please refer to SHIP 0016 - enabling local source code.

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: local-buildrun
spec:
  build:
    name: a-build
  source:
    type: Local
    local:
      name: local-source
      timeout: 3m

Defining ParamValues

A BuildRun resource can define paramValues for parameters specified in the build strategy. If a value has been provided for a parameter with the same name in the Build already, then the value from the BuildRun will have precedence.

For example, the following BuildRun overrides the value for sleep-time param, which is defined in the a-build Build resource.

---
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
  - name: cache
    value: disabled
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
  ...
  output:
  ...

---
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: a-buildrun
  namespace: a-namespace
spec:
  build:
    name: a-build
  paramValues:
  - name: cache
    value: registry

See more about paramValues usage in the related Build resource docs.

Defining the ServiceAccount

A BuildRun resource can define a service account to use. Usually this SA will host all related secrets referenced on the Build resource, for example:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildpack-nodejs-buildrun-namespaced
spec:
  build:
    name: buildpack-nodejs-build-namespaced
  serviceAccount: pipeline

You can also set the value of spec.serviceAccount to ".generate". This will generate the service account during runtime for you. The name of the generated service account is the same as that of the BuildRun.

Note: When the service account is not defined, the BuildRun uses the pipeline service account if it exists in the namespace, and falls back to the default service account.
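For example, a BuildRun that requests a generated service account could look like this (the BuildRun name is illustrative; the generated service account will carry the same name):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  # the generated service account is created with this same name
  name: buildpack-nodejs-buildrun-generate-sa
spec:
  build:
    name: buildpack-nodejs-build-namespaced
  serviceAccount: ".generate"
```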

Defining Retention Parameters

A BuildRun resource can specify how long a completed BuildRun can exist. Instead of manually cleaning up old BuildRuns, retention parameters provide an alternate method for cleaning up BuildRuns automatically.

As part of the buildrun retention parameters, we have the following fields:

  • retention.ttlAfterFailed - Specifies the duration for which a failed buildrun can exist.
  • retention.ttlAfterSucceeded - Specifies the duration for which a successful buildrun can exist.

An example of a BuildRun that uses TTL retention parameters:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildrun-retention-ttl
spec:
  build:
    name: build-retention-ttl
  retention:
    ttlAfterFailed: 10m
    ttlAfterSucceeded: 10m

Note: In case TTL values are defined in buildrun specifications as well as build specifications, priority will be given to the values defined in the buildrun specifications.

Defining Volumes

BuildRuns can declare volumes. Volumes declared in a BuildRun must override volumes defined in the corresponding BuildStrategy. If a volume is not overridable, the BuildRun will eventually fail.

In case a Build and a BuildRun that refers to this Build override the same volume, the one defined in the BuildRun is used.

Volumes follow the declaration of Pod Volumes, so all the usual volumeSource types are supported.

Here is an example of BuildRun object that overrides volumes:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildrun-name
spec:
  build:
    name: build-name
  volumes:
    - name: volume-name
      configMap:
        name: test-config

Canceling a BuildRun

To cancel a BuildRun that’s currently executing, update its spec.state to BuildRunCanceled.

When you cancel a BuildRun, the underlying TaskRun is marked as canceled per the Tekton cancel TaskRun feature.

Example of canceling a BuildRun:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildpack-nodejs-buildrun-namespaced
spec:
  # [...]
  state: "BuildRunCanceled"

Automatic BuildRun deletion

We have two controllers that ensure that buildruns can be deleted automatically if required. This is ensured by adding retention parameters in either the build specifications or the buildrun specifications.

  • BuildRun TTL parameters: These are used to make sure that BuildRuns exist for a fixed duration of time after completion.
    • buildrun.spec.retention.ttlAfterFailed: The buildrun is deleted if the mentioned duration of time has passed and the buildrun has failed.
    • buildrun.spec.retention.ttlAfterSucceeded: The buildrun is deleted if the mentioned duration of time has passed and the buildrun has succeeded.
  • Build TTL parameters: These are used to make sure that related buildruns exist for a fixed duration of time after completion.
    • build.spec.retention.ttlAfterFailed: The buildrun is deleted if the mentioned duration of time has passed and the buildrun has failed.
    • build.spec.retention.ttlAfterSucceeded: The buildrun is deleted if the mentioned duration of time has passed and the buildrun has succeeded.
  • Build Limit parameters: These are used to limit the number of related BuildRuns that can exist.
    • build.spec.retention.succeededLimit - Defines number of succeeded BuildRuns for a Build that can exist.
    • build.spec.retention.failedLimit - Defines number of failed BuildRuns for a Build that can exist.

Specifying Environment Variables

An example of a BuildRun that specifies environment variables:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildpack-nodejs-buildrun-namespaced
spec:
  build:
    name: buildpack-nodejs-build-namespaced
  env:
    - name: EXAMPLE_VAR_1
      value: "example-value-1"
    - name: EXAMPLE_VAR_2
      value: "example-value-2"

Example of a BuildRun that uses the Kubernetes Downward API to expose a Pod field as an environment variable:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildpack-nodejs-buildrun-namespaced
spec:
  build:
    name: buildpack-nodejs-build-namespaced
  env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name

Example of a BuildRun that uses the Kubernetes Downward API to expose a Container field as an environment variable:

apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildpack-nodejs-buildrun-namespaced
spec:
  build:
    name: buildpack-nodejs-build-namespaced
  env:
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: my-container
          resource: limits.memory

BuildRun Status

The BuildRun resource is updated as soon as the current image building status changes:

$ kubectl get buildrun buildpacks-v3-buildrun
NAME                    SUCCEEDED   REASON    MESSAGE   STARTTIME   COMPLETIONTIME
buildpacks-v3-buildrun  Unknown     Pending   Pending   1s

And finally:

$ kubectl get buildrun buildpacks-v3-buildrun
NAME                    SUCCEEDED   REASON      MESSAGE                              STARTTIME   COMPLETIONTIME
buildpacks-v3-buildrun  True        Succeeded   All Steps have completed executing   4m28s       16s

The above allows users to get an overview of the building mechanism state.

Understanding the state of a BuildRun

A BuildRun resource stores the relevant information regarding the object’s state under status.conditions.

Conditions allow users to quickly understand the resource state without needing to understand resource-specific details.

For the BuildRun, we use a Condition of the type Succeeded, which is a well-known type for resources that run to completion.

The status.conditions hosts different fields, like status, reason and message. Users can expect these fields to be populated with relevant information.
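For illustration, the Succeeded condition of a completed BuildRun could look like the following sketch (the timestamp is a placeholder):

```yaml
# [...]
status:
  conditions:
  - type: Succeeded
    status: "True"
    reason: Succeeded
    message: All Steps have completed executing
    lastTransitionTime: "2024-03-12T20:00:38Z"
```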

The following table illustrates the different states a BuildRun can have under its status.conditions:

| Status | Reason | CompletionTime is set | Description |
| --- | --- | --- | --- |
| Unknown | Pending | No | The BuildRun is waiting on a Pod in status Pending. |
| Unknown | Running | No | The BuildRun has been validated and started to perform its work. |
| Unknown | BuildRunCanceled | No | The user requested the BuildRun to be canceled. This results in the BuildRun controller requesting the TaskRun be canceled. Cancellation has not been done yet. |
| True | Succeeded | Yes | The BuildRun Pod is done. |
| False | Failed | Yes | The BuildRun failed in one of the steps. |
| False | BuildRunTimeout | Yes | The BuildRun timed out. |
| False | UnknownStrategyKind | Yes | The Build specified strategy Kind is unknown. (options: ClusterBuildStrategy or BuildStrategy) |
| False | ClusterBuildStrategyNotFound | Yes | The referenced cluster strategy was not found in the cluster. |
| False | BuildStrategyNotFound | Yes | The referenced namespaced strategy was not found in the cluster. |
| False | SetOwnerReferenceFailed | Yes | Setting owner references from the BuildRun to the related TaskRun failed. |
| False | TaskRunIsMissing | Yes | The BuildRun related TaskRun was not found. |
| False | TaskRunGenerationFailed | Yes | The generation of a TaskRun spec failed. |
| False | MissingParameterValues | Yes | No value has been provided for some parameters that are defined in the build strategy without any default. Values for those parameters must be provided through the Build or the BuildRun. |
| False | RestrictedParametersInUse | Yes | A value for a system parameter was provided. This is not allowed. |
| False | UndefinedParameter | Yes | A value for a parameter was provided that is not defined in the build strategy. |
| False | WrongParameterValueType | Yes | A value was provided for a build strategy parameter using the wrong type. The parameter is defined as array or string in the build strategy. Depending on that, you must provide values or a direct value. |
| False | InconsistentParameterValues | Yes | A value for a parameter contained more than one of value, configMapValue, and secretValue. Any values including array items must only provide one of them. |
| False | EmptyArrayItemParameterValues | Yes | An item inside the values of an array parameter contained none of value, configMapValue, and secretValue. Exactly one of them must be provided. Null array items are not allowed. |
| False | IncompleteConfigMapValueParameterValues | Yes | A value for a parameter contained a configMapValue where the name or the value were empty. You must specify them to point to an existing ConfigMap key in your namespace. |
| False | IncompleteSecretValueParameterValues | Yes | A value for a parameter contained a secretValue where the name or the value were empty. You must specify them to point to an existing Secret key in your namespace. |
| False | ServiceAccountNotFound | Yes | The referenced service account was not found in the cluster. |
| False | BuildRegistrationFailed | Yes | The related Build in the BuildRun is in a Failed state. |
| False | BuildNotFound | Yes | The related Build in the BuildRun was not found. |
| False | BuildRunCanceled | Yes | The BuildRun and underlying TaskRun were canceled successfully. |
| False | BuildRunNameInvalid | Yes | The defined BuildRun name (metadata.name) is invalid. The BuildRun name should be a valid label value. |
| False | BuildRunNoRefOrSpec | Yes | BuildRun does not have either spec.build.name or spec.build.spec defined. There is no connection to a Build specification. |
| False | BuildRunAmbiguousBuild | Yes | The defined BuildRun uses both spec.build.name and spec.build.spec. Only one of them is allowed at the same time. |
| False | BuildRunBuildFieldOverrideForbidden | Yes | The defined BuildRun uses an override (e.g. timeout, paramValues, output, or env) in combination with spec.build.spec, which is not allowed. Use the spec.build.spec to directly specify the respective value. |
| False | PodEvicted | Yes | The BuildRun Pod was evicted from the node it was running on. See API-initiated Eviction and Node-pressure Eviction for more information. |
| False | StepOutOfMemory | Yes | The BuildRun Pod failed because a step went out of memory. |

Note: We heavily rely on the Tekton TaskRun Conditions for populating the BuildRun ones, with some exceptions.

Understanding failed BuildRuns

To make it easier to understand why a BuildRun failed, users can infer the pod and container where the failure took place from the status.failureDetails field.

In addition, the status.conditions hosts a compacted message under the message field that contains the kubectl command to run to retrieve the logs.

The status.failureDetails field also includes a detailed failure reason and message, if the build strategy provides them.

Example of failed BuildRun:

# [...]
status:
  # [...]
  failureDetails:
    location:
      container: step-source-default
      pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr
    message: The source repository does not exist, or you have insufficient permission
      to access it.
    reason: GitRemotePrivate

Understanding failed BuildRuns due to VulnerabilitiesFound

A BuildRun fails if the vulnerability scan finds vulnerabilities in the generated image and failOnFinding is set to true in the vulnerabilityScan configuration. For setting vulnerabilityScan, see the Defining the vulnerabilityScan section of the Build documentation.

Example of failed BuildRun due to vulnerabilities present in the image:

# [...]
status:
  # [...]
  conditions:
  - type: Succeeded
    lastTransitionTime: "2024-03-12T20:00:38Z"
    status: "False"
    reason: VulnerabilitiesFound
    message: "Vulnerabilities have been found in the output image. For detailed information, check buildrun status or see kubectl --namespace default logs vuln-s6skc-v7wd2-pod --container step-image-processing"

Understanding failed git-source step

All git-related operations support error reporting via status.failureDetails. The following table explains the possible error reasons:

ReasonDescription
GitAuthInvalidUserOrPassBasic authentication has failed. Check your username or password. Note: GitHub requires a personal access token instead of your regular password.
GitAuthInvalidKeyThe key is invalid for the specified target. Please make sure that the Git repository exists, you have sufficient permissions, and the key is in the right format.
GitRevisionNotFoundThe remote revision does not exist. Check the revision specified in your Build.
GitRemoteRepositoryNotFoundThe source repository does not exist, or you have insufficient permissions to access it.
GitRemoteRepositoryPrivateYou are trying to access a non-existing or private repository without having sufficient permissions to access it via HTTPS.
GitBasicAuthIncompleteBasic Auth incomplete: Both username and password must be configured.
GitSSHAuthUnexpectedCredential/URL inconsistency: SSH credentials were provided, but the URL is not an SSH Git URL.
GitSSHAuthExpectedCredential/URL inconsistency: No SSH credentials provided, but the URL is an SSH Git URL.
GitErrorThe specific error reason is unknown. Check the error message for more information.

Step Results in BuildRun Status

After a BuildRun completes, the .status field contains the results (.status.taskResults) emitted from the TaskRun steps generated by the BuildRun controller as part of processing the BuildRun. These results contain valuable metadata for users, like the image digest or the commit SHA of the source code used for building. The results from the source step are surfaced to the .status.sources field, and the results from the output step are surfaced to the .status.output field of a BuildRun.

Example of a BuildRun with surfaced results for git source (note that the branchName is only included if the Build does not specify any revision):

# [...]
status:
  buildSpec:
    # [...]
  output:
    digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53
    size: 1989004
  sources:
  - name: default
    git:
      commitAuthor: xxx xxxxxx
      commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde
      branchName: main

Another example of a BuildRun with surfaced results for a local source code (ociArtifact) source:

# [...]
status:
  buildSpec:
    # [...]
  output:
    digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53
    size: 1989004
  sources:
  - name: default
    ociArtifact:
      digest: sha256:0f5e2070b534f9b880ed093a537626e3c7fdd28d5328a8d6df8d29cd3da760c7

Note: The digest and size of the output image are only included if the build strategy provides them. See System results.

Another example of a BuildRun with surfaced results for vulnerability scanning:

# [...]
status:
  buildSpec:
    # [...]
  output:
    digest: sha256:1023103
    size: 12310380
    vulnerabilities:
    - id: CVE-2022-12345
      severity: high
    - id: CVE-2021-54321
      severity: medium

Note: The vulnerability scan will only run if it is specified in the build or buildrun spec. See Defining the vulnerabilityScan.

Build Snapshot

For every BuildRun controller reconciliation, the buildSpec in the status of the BuildRun is updated if an existing owned TaskRun is present. During this update, a Build resource snapshot is generated and embedded into the status.buildSpec path of the BuildRun. The buildSpec is a copy of the original Build spec, from which the BuildRun executed a particular image build. The snapshot approach allows developers to see the original Build configuration.

Relationship with Tekton Tasks

The BuildRun resource abstracts the image construction by delegating this work to the Tekton Pipeline TaskRun. In contrast to a Tekton Pipeline Task, which only defines the steps, a TaskRun runs all steps until completion of the Task or until a failure occurs in the Task.

During reconciliation, the BuildRun controller generates a new TaskRun. The controller embeds in the TaskRun definition the required steps to execute. These steps are defined in the strategy referenced by the Build resource, either a ClusterBuildStrategy or a BuildStrategy.

2.4 - Authentication during builds

The following document provides an introduction to the different authentication methods that can take place during an image build when using the Build controller.

Overview

There are two places where users might need to define authentication when building images. Authentication to a container registry is the most common one, but users might also need to define authentication for pulling source code from Git. In both cases, authentication is configured via secrets in which the required sensitive data is stored.

Build Secrets Annotation

Users need to add the annotation build.shipwright.io/referenced.secret: "true" to a build secret so that the build controller can decide to take a reconcile action when a secret event (create, update, or delete) happens. Below is an example secret with the build annotation:

apiVersion: v1
data:
  .dockerconfigjson: xxxxx
kind: Secret
metadata:
  annotations:
    build.shipwright.io/referenced.secret: "true"
  name: secret-docker
type: kubernetes.io/dockerconfigjson

This annotation helps us filter out secrets that are not referenced on a Build instance. That means if a secret does not have this annotation, the Build controller will not reconcile even when an event happens on that secret. Being able to reconcile on secret events allows the Build controller to re-trigger validations on the Build configuration, helping users understand if a dependency is missing.

If you are using kubectl to manage secrets, you can first create the build secret using the kubectl create secret command and then annotate it using kubectl annotate secrets. Below is an example:

kubectl -n ${namespace} create secret docker-registry example-secret --docker-server=${docker-server} --docker-username="${username}" --docker-password="${password}" --docker-email=me@here.com
kubectl -n ${namespace} annotate secrets example-secret build.shipwright.io/referenced.secret='true'

Authentication for Git

There are two ways of authenticating to Git (applies to both GitHub and GitLab): SSH and basic authentication.

SSH authentication

For SSH authentication, you must use the Tekton annotations to specify the hostname(s) of the Git repository providers that you use. This is github.com for GitHub, or gitlab.com for GitLab.

As seen in the following example, there are two things to notice:

  • The Kubernetes secret should be of the type kubernetes.io/ssh-auth
  • The data.ssh-privatekey can be generated by running, for example, base64 < ~/.ssh/id_rsa, where ~/.ssh/id_rsa is the key used to authenticate into Git.
apiVersion: v1
kind: Secret
metadata:
  name: secret-git-ssh-auth
  annotations:
    build.shipwright.io/referenced.secret: "true"
type: kubernetes.io/ssh-auth
data:
  ssh-privatekey: <base64-encoded private key>

Basic authentication

Basic authentication is very similar to the SSH one, but with the following differences:

  • The Kubernetes secret should be of the type kubernetes.io/basic-auth
  • The stringData should host your username and personal access token in clear text.

Note: GitHub and GitLab no longer accept account passwords when authenticating Git operations. Instead, you must use token-based authentication for all authenticated Git operations. You can create your own personal access token on GitHub and GitLab.

apiVersion: v1
kind: Secret
metadata:
  name: secret-git-basic-auth
  annotations:
    build.shipwright.io/referenced.secret: "true"
type: kubernetes.io/basic-auth
stringData:
  username: <cleartext username>
  password: <cleartext token>

Usage of git secret

With the right secret in place (note: ensure the secret is created in the proper Kubernetes namespace), users should reference it in their Build YAML definitions.

Depending on the secret type, there are two ways of doing this:

When using ssh auth, users should follow:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    git:
      url: git@gitlab.com:eduardooli/newtaxi.git
      cloneSecret: secret-git-ssh-auth

When using basic auth, users should follow:

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    git:
      url: https://gitlab.com/eduardooli/newtaxi.git
      cloneSecret: secret-git-basic-auth

Authentication to container registries

For pushing images to private registries, users need to define a secret in their respective namespace.

Docker Hub

Run the following commands to generate your secret:

kubectl --namespace <YOUR_NAMESPACE> create secret docker-registry <CONTAINER_REGISTRY_SECRET_NAME> \
  --docker-server=<REGISTRY_HOST> \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD> \
  --docker-email=me@here.com
kubectl --namespace <YOUR_NAMESPACE> annotate secrets <CONTAINER_REGISTRY_SECRET_NAME> build.shipwright.io/referenced.secret='true'

Note: When generating a secret to access Docker Hub, the REGISTRY_HOST value should be https://index.docker.io/v1/, and the username is the Docker ID. Note: The value of PASSWORD can be your Docker Hub password or an access token. A Docker access token can be created via Account Settings, then Security in the sidebar, and the New Access Token button.

Usage of registry secret

With the right secret in place (note: ensure the secret is created in the proper Kubernetes namespace), users should reference it in their Build YAML definitions. For container registries, the secret should be placed under the spec.output.pushSecret path.

apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  # [...]
  output:
    image: docker.io/foobar/sample:latest
    pushSecret: <CONTAINER_REGISTRY_SECRET_NAME>

References

See more information in the official Tekton documentation for authentication.

2.5 - Configuration

Controller Settings

The controller is installed into Kubernetes with reasonable defaults. However, there are some settings that can be overridden using environment variables in controller.yaml.

The following environment variables are available:

| Environment Variable | Description |
| -------------------- | ----------- |
| CTX_TIMEOUT | Override the default context timeout used for all Custom Resource Definition reconciliation operations. Default is 5 (seconds). |
| REMOTE_ARTIFACTS_CONTAINER_IMAGE | Specify the container image used for the .spec.sources remote artifacts download. By default it uses quay.io/quay/busybox:latest. |
| TERMINATION_LOG_PATH | Path of the termination log. This is where the controller application will write the reason of its termination. Default value is /dev/termination-log. |
| GIT_ENABLE_REWRITE_RULE | Enable the Git wrapper to set up a URL insteadOf Git config rewrite rule for the respective source URL hostname. Default is false. |
| GIT_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that clone a Git repository. Default is {"image": "ghcr.io/shipwright-io/build/git:latest", "command": ["/ko-app/git"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser": 1000, "runAsGroup": 1000}} ¹. The following properties are ignored as they are set by the controller: args, name. |
| GIT_CONTAINER_IMAGE | Custom container image for Git clone steps. If GIT_CONTAINER_TEMPLATE also specifies an image, then the value of GIT_CONTAINER_IMAGE has precedence. |
| BUNDLE_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that pull a bundle image to obtain the packaged source code. Default is {"image": "ghcr.io/shipwright-io/build/bundle:latest", "command": ["/ko-app/bundle"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser": 1000, "runAsGroup": 1000}} ¹. The following properties are ignored as they are set by the controller: args, name. |
| BUNDLE_CONTAINER_IMAGE | Custom container image that pulls a bundle image to obtain the packaged source code. If BUNDLE_CONTAINER_TEMPLATE also specifies an image, then the value of BUNDLE_CONTAINER_IMAGE has precedence. |
| IMAGE_PROCESSING_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that process the image. Default is {"image": "ghcr.io/shipwright-io/build/image-processing:latest", "command": ["/ko-app/image-processing"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"add": ["DAC_OVERRIDE"], "drop": ["ALL"]}, "runAsUser": 0, "runAsGroup": 0}}. The following properties are ignored as they are set by the controller: args, name. |
| IMAGE_PROCESSING_CONTAINER_IMAGE | Custom container image that is used for steps that process the image. If IMAGE_PROCESSING_CONTAINER_TEMPLATE also specifies an image, then the value of IMAGE_PROCESSING_CONTAINER_IMAGE has precedence. |
| WAITER_CONTAINER_TEMPLATE | JSON representation of a Container template that waits for local source code to be uploaded to it. Default is {"image": "ghcr.io/shipwright-io/build/waiter:latest", "command": ["/ko-app/waiter"], "args": ["start"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser": 1000, "runAsGroup": 1000}}. The following properties are ignored as they are set by the controller: args, name. |
| WAITER_CONTAINER_IMAGE | Custom container image that waits for local source code to be uploaded to it. If WAITER_CONTAINER_TEMPLATE also specifies an image, then the value of WAITER_CONTAINER_IMAGE has precedence. |
| BUILD_CONTROLLER_LEADER_ELECTION_NAMESPACE | Set the namespace to be used to store the shipwright-build-controller lock. By default it is the same namespace as the controller itself. |
| BUILD_CONTROLLER_LEASE_DURATION | Override the LeaseDuration, which is the duration that non-leader candidates will wait to force acquire leadership. |
| BUILD_CONTROLLER_RENEW_DEADLINE | Override the RenewDeadline, which is the duration that the acting leader will retry refreshing leadership before giving up. |
| BUILD_CONTROLLER_RETRY_PERIOD | Override the RetryPeriod, which is the duration the LeaderElector clients should wait between tries of actions. |
| BUILD_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the Build controller. A value of 0 or lower uses the default from the controller-runtime controller Options. Default is 0. |
| BUILDRUN_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the BuildRun controller. A value of 0 or lower uses the default from the controller-runtime controller Options. Default is 0. |
| BUILDSTRATEGY_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the BuildStrategy controller. A value of 0 or lower uses the default from the controller-runtime controller Options. Default is 0. |
| CLUSTERBUILDSTRATEGY_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the ClusterBuildStrategy controller. A value of 0 or lower uses the default from the controller-runtime controller Options. Default is 0. |
| KUBE_API_BURST | Burst to use for the Kubernetes API client. See Config.Burst. A value of 0 or lower uses the default from client-go, which currently is 10. Default is 0. |
| KUBE_API_QPS | QPS to use for the Kubernetes API client. See Config.QPS. A value of 0 or lower uses the default from client-go, which currently is 5. Default is 0. |
| VULNERABILITY_COUNT_LIMIT | Holds the vulnerability count limit if the vulnerability scan is enabled for the output image. If it is set to 10, only 10 vulnerabilities, sorted by severity, are included in the BuildRun status.output. Default is 50. |
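For example, a setting is overridden by adding an entry to the env section of the controller Deployment in controller.yaml (the container name and values below are illustrative):

```yaml
# Illustrative controller.yaml excerpt: override two controller settings
spec:
  template:
    spec:
      containers:
      - name: shipwright-build-controller
        env:
        - name: CTX_TIMEOUT
          value: "10"  # reconciliation context timeout in seconds
        - name: BUILD_MAX_CONCURRENT_RECONCILES
          value: "4"   # allow four concurrent Build reconciles
```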

Role-based Access Control

The release deployment YAML file includes two cluster-wide roles for using Shipwright Build objects. The following roles are installed:

  • shipwright-build-aggregate-view: this role grants read access (get, list, watch) to most Shipwright Build objects. This includes BuildStrategy, ClusterBuildStrategy, Build, and BuildRun objects. This role is aggregated to the Kubernetes “view” role.
  • shipwright-build-aggregate-edit: this role grants write access (create, update, patch, delete) to Shipwright objects that are namespace-scoped. This includes BuildStrategy, Builds, and BuildRuns. Read access is granted to all ClusterBuildStrategy objects. This role is aggregated to the Kubernetes “edit” and “admin” roles.

Only cluster administrators are granted write access to ClusterBuildStrategy objects. This can be changed by creating a separate Kubernetes ClusterRole with these permissions and binding the role to appropriate users.
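As a sketch, such a ClusterRole could look like the following (the role name is illustrative; the resources live in the shipwright.io API group):

```yaml
# Illustrative ClusterRole granting write access to ClusterBuildStrategy objects
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterbuildstrategy-editor
rules:
- apiGroups: ["shipwright.io"]
  resources: ["clusterbuildstrategies"]
  verbs: ["create", "update", "patch", "delete"]
```

Bind it with a ClusterRoleBinding to the users or groups that should manage cluster-scoped strategies.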


  ¹ The runAsUser and runAsGroup are dynamically overwritten depending on the build strategy that is used. See Security Contexts for more information.

2.6 - Build Controller Metrics

The Build component exposes several metrics to help you monitor the health and behavior of your build resources.

The following build metrics are exposed on port 8383.

| Name | Type | Description | Labels | Status |
| ---- | ---- | ----------- | ------ | ------ |
| build_builds_registered_total | Counter | Number of total registered Builds. | buildstrategy=<build_buildstrategy_name> ¹, namespace=<buildrun_namespace> ¹, build=<build_name> ¹ | experimental |
| build_buildruns_completed_total | Counter | Number of total completed BuildRuns. | buildstrategy=<build_buildstrategy_name> ¹, namespace=<buildrun_namespace> ¹, build=<build_name> ¹, buildrun=<buildrun_name> ¹ | experimental |
| build_buildrun_establish_duration_seconds | Histogram | BuildRun establish duration in seconds. | buildstrategy=<build_buildstrategy_name> ¹, namespace=<buildrun_namespace> ¹, build=<build_name> ¹, buildrun=<buildrun_name> ¹ | experimental |
| build_buildrun_completion_duration_seconds | Histogram | BuildRun completion duration in seconds. | buildstrategy=<build_buildstrategy_name> ¹, namespace=<buildrun_namespace> ¹, build=<build_name> ¹, buildrun=<buildrun_name> ¹ | experimental |
| build_buildrun_rampup_duration_seconds | Histogram | BuildRun ramp-up duration in seconds. | buildstrategy=<build_buildstrategy_name> ¹, namespace=<buildrun_namespace> ¹, build=<build_name> ¹, buildrun=<buildrun_name> ¹ | experimental |
| build_buildrun_taskrun_rampup_duration_seconds | Histogram | BuildRun TaskRun ramp-up duration in seconds. | buildstrategy=<build_buildstrategy_name> ¹, namespace=<buildrun_namespace> ¹, build=<build_name> ¹, buildrun=<buildrun_name> ¹ | experimental |
| build_buildrun_taskrun_pod_rampup_duration_seconds | Histogram | BuildRun TaskRun pod ramp-up duration in seconds. | buildstrategy=<build_buildstrategy_name> ¹, namespace=<buildrun_namespace> ¹, build=<build_name> ¹, buildrun=<buildrun_name> ¹ | experimental |

¹ Labels for a metric are disabled by default. See Configuration of metric labels to enable them.

Configuration of histogram buckets

Environment variables can be set to use custom buckets for the histogram metrics:

| Metric | Environment variable | Default |
| ------ | -------------------- | ------- |
| build_buildrun_establish_duration_seconds | PROMETHEUS_BR_EST_DUR_BUCKETS | 0,1,2,3,5,7,10,15,20,30 |
| build_buildrun_completion_duration_seconds | PROMETHEUS_BR_COMP_DUR_BUCKETS | 50,100,150,200,250,300,350,400,450,500 |
| build_buildrun_rampup_duration_seconds | PROMETHEUS_BR_RAMPUP_DUR_BUCKETS | 0,1,2,3,4,5,6,7,8,9,10 |
| build_buildrun_taskrun_rampup_duration_seconds | PROMETHEUS_BR_RAMPUP_DUR_BUCKETS | 0,1,2,3,4,5,6,7,8,9,10 |
| build_buildrun_taskrun_pod_rampup_duration_seconds | PROMETHEUS_BR_RAMPUP_DUR_BUCKETS | 0,1,2,3,4,5,6,7,8,9,10 |

The values have to be a comma-separated list of numbers. You need to set the environment variable for the build controller for your customization to become active. When running locally, set the variable right before starting the controller:

export PROMETHEUS_BR_COMP_DUR_BUCKETS=30,60,90,120,180,240,300,360,420,480
make local

When you deploy the build controller in a Kubernetes cluster, you need to extend the spec.template.spec.containers[0].env section of the sample deployment file, controller.yaml. Add another entry:

[...]
  env:
  - name: PROMETHEUS_BR_COMP_DUR_BUCKETS
    value: "30,60,90,120,180,240,300,360,420,480"
[...]

Configuration of metric labels

As the amount of buckets and labels has a direct impact on the number of Prometheus time series, you can selectively enable labels that you are interested in using the PROMETHEUS_ENABLED_LABELS environment variable. The supported labels are:

  • buildstrategy
  • namespace
  • build
  • buildrun

Use a comma-separated value to enable multiple labels. For example:

export PROMETHEUS_ENABLED_LABELS=namespace
make local

or

export PROMETHEUS_ENABLED_LABELS=buildstrategy,namespace,build
make local

When you deploy the build controller in a Kubernetes cluster, you need to extend the spec.template.spec.containers[0].env section of the sample deployment file, controller.yaml. Add another entry:

[...]
  env:
  - name: PROMETHEUS_ENABLED_LABELS
    value: namespace
[...]

2.7 - Build Controller Profiling

The build controller supports a pprof profiling mode, which is omitted from the binary by default. To use the profiling, use the controller image that was built with pprof enabled.

Enable pprof in the build controller

In the Kubernetes cluster, edit the shipwright-build-controller deployment to use the container tag with the debug suffix.

kubectl --namespace <namespace> set image \
  deployment/shipwright-build-controller \
  shipwright-build-controller="$(kubectl --namespace <namespace> get deployment shipwright-build-controller --output jsonpath='{.spec.template.spec.containers[].image}')-debug"

Connect go pprof to build controller

Depending on the respective setup, there could be multiple build controller pods for high availability reasons. In this case, you have to look up the current leader first. The following command can be used to verify the currently active leader:

kubectl --namespace <namespace> get configmap shipwright-build-controller-lock --output json \
  | jq --raw-output '.metadata.annotations["control-plane.alpha.kubernetes.io/leader"]' \
  | jq --raw-output .holderIdentity

The pprof endpoint is not exposed in the cluster and can only be used from inside the container. Therefore, set up port forwarding to make the pprof port available locally.

kubectl --namespace <namespace> port-forward <controller-pod-name> 8383:8383

Now, you can set up a local webserver to browse through the profiling data.

go tool pprof -http localhost:8080 http://localhost:8383/debug/pprof/heap

Please note: for this to work, graphviz must be installed on your system, for example via brew install graphviz, apt-get install graphviz, yum install graphviz, or similar.

3 - Contributing Guidelines

Welcome to Shipwright, we are glad you want to contribute to the project! This document contains general guidelines for submitting contributions. Each component of Shipwright will have its own specific guidelines.

Contributing prerequisites (CLA/DCO)

The project enforces the Developer Certificate of Origin (DCO). By submitting pull requests, submitters acknowledge that they grant the Apache License v2 to the code and that they are eligible to grant this license for all commits submitted in their pull requests.

Getting Started

All contributors must abide by our Code of Conduct.

The core code for Shipwright is located in the following repositories:

  • build - the Build APIs and associated controller to run builds.
  • cli - the shp command line for Shipwright builds
  • operator - an operator to install Shipwright components on Kubernetes via OLM.

Technical documentation is spread across the code repositories and is consolidated in the website repository. Content in website is published to shipwright.io.

Creating new Issues

We recommend opening an issue for the following scenarios:

  • Asking for help or questions. (Use the discussion or help_wanted label)
  • Reporting a bug. (Use the kind/bug label)
  • Requesting a new feature. (Use the kind/feature label)

Use the following checklist to determine where you should create an issue:

  • If the issue is related to how a Build or BuildRun behaves, or related to Build strategies, create an issue in build.
  • If the issue is related to the command line, create an issue in cli.
  • If the issue is related to how the operator installs Shipwright on a cluster, create an issue in operator.
  • If the issue is related to the shipwright.io website, create an issue in website.

If you are not sure, create an issue in this repository, and the Shipwright maintainers will route it to the correct location.

If a feature request is sufficiently broad or significant, the community may ask you to submit a SHIP enhancement proposal. Please refer to the SHIP guidelines to learn how to submit a SHIP proposal.

Writing Pull Requests

Contributions can be submitted by creating a pull request on GitHub. We recommend you do the following to ensure the maintainers can collaborate on your contribution:

  • Fork the project into your personal GitHub account
  • Create a new feature branch for your contribution
  • Make your changes
  • If you make code changes, ensure tests are passing
  • Open a PR with a clear description, completing the pull request template if one is provided. Please reference the appropriate GitHub issue if your pull request provides a fix.

NOTE: All commits must be signed off (Developer Certificate of Origin (DCO)), so make sure you use the -s flag when you commit. See more information on signing here.
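For example, the sign-off trailer is added with the -s (or --signoff) flag; the sketch below demonstrates this in a throwaway repository (names, paths, and the commit message are illustrative):

```shell
# Illustrative: --signoff appends a "Signed-off-by" trailer taken from your git config
repo=$(mktemp -d)
cd "$repo"
git init --quiet
git config user.name "Jane Developer"
git config user.email "jane@example.com"
echo "example" > README.md
git add README.md
git commit --quiet --signoff --message "docs: example change"
# the commit message now ends with: Signed-off-by: Jane Developer <jane@example.com>
git log -1 --format=%B
```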

Code review process

Once your pull request is submitted, a Shipwright maintainer should be assigned to review your changes.

The code review should cover:

  • Ensure all related tests (unit, integration and e2e) are passing.
  • Ensure the code style is compliant with the coding conventions
  • Ensure the code is properly documented, e.g. enough comments where needed.
  • Ensure the code is adding the necessary test cases (unit, integration or e2e) if needed.

Contributors are expected to respond to feedback from reviewers in a constructive manner. Reviewers are expected to respond to new submissions in a timely fashion, with clear language if changes are requested.

Once the pull request is approved and marked “lgtm”, it will get merged.

Community Meetings Participation

We run community meetings every Monday at 13:00 UTC. For each upcoming meeting, we generate a new issue where we lay out the topics to discuss. See our previous meetings' outcomes. Please request an invite in our Slack channel or join the shipwright-dev mailing list.

All meetings are also published on our public calendar.

Contact Information