Documentation
Shipwright is an extensible framework for building container images on Kubernetes.
Shipwright supports popular tools such as Kaniko, Cloud Native Buildpacks, Buildah, and more!
Shipwright is based around four elements for each build:
- Source code - the “what” you are trying to build
- Output image - “where” you are trying to deliver your application
- Build strategy - “how” your application is assembled
- Invocation - “when” you want to build your application
Comparison with local image builds
Developers who use Docker are familiar with this process:
- Clone source from a git-based repository (“what”)
- Build the container image (“when” and “how”)
  docker build -t registry.mycompany.com/myorg/myapp:latest .
- Push the container image to your registry (“where”)
  docker push registry.mycompany.com/myorg/myapp:latest
Shipwright Build APIs
Shipwright’s Build API consists of four core CustomResourceDefinitions (CRDs):
- Build - defines what to build, and where the application should be delivered.
- BuildStrategy and ClusterBuildStrategy - define how to build an application for an image building tool.
- BuildRun - invokes the build. You create a BuildRun to tell Shipwright to start building your application.
Build
The Build
object provides a playbook on how to assemble your specific application. The simplest
build consists of a git source, a build strategy, and an output image:
apiVersion: build.dev/v1alpha1
kind: Build
metadata:
  name: kaniko-golang-build
  annotations:
    build.build.dev/build-run-deletion: "true"
spec:
  source:
    url: https://github.com/sbose78/taxi
  strategy:
    name: kaniko
    kind: ClusterBuildStrategy
  output:
    image: registry.mycompany.com/my-org/taxi-app:latest
Builds can be extended to push to private registries, use a different Dockerfile, and more.
BuildStrategy and ClusterBuildStrategy
BuildStrategy
and ClusterBuildStrategy
are related APIs to define how a given tool should be
used to assemble an application. They are distinguished by their scope - BuildStrategy
objects
are namespace scoped, whereas ClusterBuildStrategy
objects are cluster scoped.
The spec of a BuildStrategy or ClusterBuildStrategy consists of a buildSteps object, which looks and feels like Kubernetes container specifications. Below is an example spec for Kaniko, which can build an image from a Dockerfile within a container:
# this is a fragment of a manifest
spec:
  buildSteps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:v1.3.0
      workingDir: /workspace/source
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - CHOWN
            - DAC_OVERRIDE
            - FOWNER
            - SETGID
            - SETUID
            - SETFCAP
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
      command:
        - /kaniko/executor
      args:
        - --skip-tls-verify=true
        - --dockerfile=$(build.dockerfile)
        - --context=/workspace/source/$(build.source.contextDir)
        - --destination=$(build.output.image)
        - --oci-layout-path=/workspace/output/image
        - --snapshotMode=redo
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 250m
          memory: 65Mi
BuildRun
Each BuildRun object invokes a build on your cluster. You can think of these as Kubernetes Jobs or Tekton TaskRuns: they represent a workload on your cluster, ultimately resulting in a running Pod. See BuildRun for more details.
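As a minimal sketch, a BuildRun that invokes an existing Build could look like the following (this uses the shipwright.io/v1beta1 API shown later in this documentation; the Build name kaniko-golang-build is assumed from the example above):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  # generateName lets the cluster create a unique name per invocation
  generateName: kaniko-golang-buildrun-
spec:
  build:
    # name of an existing Build in the same namespace (assumed from the example above)
    name: kaniko-golang-build
```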
Further reading
1 - Getting Started
1.1 - Installation
Install Shipwright on your Kubernetes cluster.
The Shipwright Build APIs and controllers can be installed directly with our release deployment, or
with our operator.
Prerequisites
Installing Shipwright Builds with the Operator
The Shipwright operator is designed to be installed with the Operator Lifecycle Manager (“OLM”).
Before installation, ensure that OLM has been deployed on your cluster by following the OLM installation instructions.
Installation
Once OLM has been deployed, use the following command to install the latest operator release from operatorhub.io:
$ kubectl apply -f https://operatorhub.io/install/shipwright-operator.yaml
Usage
To deploy and manage Shipwright Builds in your cluster,
first make sure this operator is installed and running.
Next, create the following:
---
apiVersion: operator.shipwright.io/v1alpha1
kind: ShipwrightBuild
metadata:
  name: shipwright-operator
spec:
  targetNamespace: shipwright-build
The operator will deploy Shipwright Builds in the provided targetNamespace. When .spec.targetNamespace is not set, the namespace defaults to shipwright-build. Refer to the ShipwrightBuild documentation for more information about this custom resource.
Installing Shipwright Builds Directly
We also publish a Kubernetes manifest that installs Shipwright directly into the shipwright-build
namespace.
Applying this manifest requires cluster administrator permissions:
$ kubectl apply -f https://github.com/shipwright-io/build/releases/latest/download/release.yaml
Installing Sample Build Strategies
The Shipwright community maintains a curated set of build strategies for popular build tools.
These can be optionally installed after Shipwright Builds has been deployed:
$ kubectl apply -f https://github.com/shipwright-io/build/releases/latest/download/sample-strategies.yaml
2 - Shipwright Builds
2.1 - Build
Overview
A Build
resource allows the user to define:
- source
- trigger
- strategy
- paramValues
- output
- timeout
- env
- retention
- volumes
A Build
is available within a namespace.
Build Controller
The controller watches for:
- Updates on the Build resource (CRD instance)
When the controller reconciles, it:
- Validates if the referenced Strategy exists.
- Validates if the specified paramValues exist on the referenced strategy parameters. It also validates if the paramValues names collide with the Shipwright reserved names.
- Validates if the container registry output secret exists.
- Validates if the referenced spec.source.git.url endpoint exists.
Build Validations
Note: reported validations in build status are deprecated, and will be removed in a future release.
To prevent users from triggering BuildRuns (executions of a Build) that will eventually fail because of wrong or missing dependencies or configuration settings, the Build controller validates them in advance. If all validations are successful, users can expect a Succeeded status.reason. However, if any validation fails, users can rely on the status.reason and status.message fields to understand the root cause.
| Status.Reason | Description |
| --- | --- |
| BuildStrategyNotFound | The referenced namespace-scope strategy doesn’t exist. |
| ClusterBuildStrategyNotFound | The referenced cluster-scope strategy doesn’t exist. |
| SetOwnerReferenceFailed | Setting owner references between a Build and a BuildRun failed. This status is triggered when you set the spec.retention.atBuildDeletion to true in a Build. |
| SpecSourceSecretRefNotFound | The secret used to authenticate to git doesn’t exist. |
| SpecOutputSecretRefNotFound | The secret used to authenticate to the container registry doesn’t exist. |
| SpecBuilderSecretRefNotFound | The secret used to authenticate the container registry doesn’t exist. |
| MultipleSecretRefNotFound | More than one secret is missing. At the moment, only three paths on a Build can specify a secret. |
| RestrictedParametersInUse | One or many defined paramValues are colliding with Shipwright reserved parameters. See Defining Params for more information. |
| UndefinedParameter | One or many defined paramValues are not defined in the referenced strategy. Please ensure that the strategy defines them under its spec.parameters list. |
| RemoteRepositoryUnreachable | The defined spec.source.git.url was not found. This validation only takes place for HTTP/HTTPS protocols. |
| BuildNameInvalid | The defined Build name (metadata.name) is invalid. The Build name should be a valid label value. |
| SpecEnvNameCanNotBeBlank | The name for a user-provided environment variable is blank. |
| SpecEnvValueCanNotBeBlank | The value for a user-provided environment variable is blank. |
| SpecEnvOnlyOneOfValueOrValueFromMustBeSpecified | Both value and valueFrom were specified, which are mutually exclusive. |
| RuntimePathsCanNotBeEmpty | The spec.runtime feature is used but the paths were not specified. |
| WrongParameterValueType | A single value was provided for an array parameter, or vice versa. |
| InconsistentParameterValues | Parameter values have more than one of configMapValue, secretValue, or value set. |
| EmptyArrayItemParameterValues | Array parameters contain an item where none of configMapValue, secretValue, or value is set. |
| IncompleteConfigMapValueParameterValues | A configMapValue is specified where the name or the key is empty. |
| IncompleteSecretValueParameterValues | A secretValue is specified where the name or the key is empty. |
| VolumeDoesNotExist | A volume referenced by the Build does not exist, therefore the Build cannot run. |
| VolumeNotOverridable | A volume defined by the Build is not set as overridable in the strategy. |
| UndefinedVolume | A volume defined by the Build is not found in the strategy. |
| TriggerNameCanNotBeBlank | A trigger condition does not have a name. |
| TriggerInvalidType | The trigger type is invalid. |
| TriggerInvalidGitHubWebHook | The trigger type GitHub is invalid. |
| TriggerInvalidImage | The trigger type Image is invalid. |
| TriggerInvalidPipeline | The trigger type Pipeline is invalid. |
| OutputTimestampNotSupported | An unsupported output timestamp setting was used. |
| OutputTimestampNotValid | The output timestamp value is not valid. |
Configuring a Build
The Build
definition supports the following fields:
Required:
- apiVersion - Specifies the API version, for example shipwright.io/v1beta1.
- kind - Specifies the Kind type, for example Build.
- metadata - Metadata that identifies the custom resource instance, especially the name of the Build, and in which namespace you place it. Note: You should use your own namespace, and not put your builds into the shipwright-build namespace where Shipwright’s system components run.
- spec.source - Refers to the location of the source code, for example a Git repository or OCI artifact image.
- spec.strategy - Refers to the BuildStrategy to be used, see the examples.
- spec.output - Refers to the location where the generated image would be pushed.
- spec.output.pushSecret - References an existing secret to get access to the container registry.
Optional:
- spec.paramValues - Refers to a name-value(s) list to specify values for parameters defined in the BuildStrategy.
- spec.timeout - Defines a custom timeout. The value needs to be parsable by ParseDuration, for example, 5m. The default is ten minutes. You can overwrite the value in the BuildRun.
- spec.output.annotations - Refers to a list of key/value pairs that could be used to annotate the output image.
- spec.output.labels - Refers to a list of key/value pairs that could be used to label the output image.
- spec.output.timestamp - Instructs the build to change the output image creation timestamp to the specified value. When omitted, the respective build strategy tool defines the output image timestamp.
  - Use the string Zero to set the image timestamp to UNIX epoch timestamp zero.
  - Use the string SourceTimestamp to set the image timestamp to the source timestamp, i.e. the timestamp of the Git commit that was used.
  - Use the string BuildTimestamp to set the image timestamp to the timestamp of the build run.
  - Use any valid UNIX epoch seconds number as a string to set this as the image timestamp.
- spec.env - Specifies additional environment variables that should be passed to the build container. The available variables depend on the tool that is being used by the chosen build strategy.
- spec.retention.atBuildDeletion - Defines if all related BuildRuns need to be deleted when deleting the Build. The default is false.
- spec.retention.ttlAfterFailed - Specifies the duration for which a failed BuildRun can exist.
- spec.retention.ttlAfterSucceeded - Specifies the duration for which a successful BuildRun can exist.
- spec.retention.failedLimit - Specifies the number of failed BuildRuns that can exist.
- spec.retention.succeededLimit - Specifies the number of successful BuildRuns that can exist.
Defining the Source
A Build
resource can specify a source type, such as a Git repository or an OCI artifact, together with other parameters like:
- source.type - Specifies the type of the data-source. Currently, the supported types are “Git”, “OCIArtifact”, and “Local”.
- source.git.url - Specifies the source location using a Git repository.
- source.git.cloneSecret - For private repositories or registries, the name references a secret in the namespace that contains the SSH private key or Docker access credentials, respectively.
- source.git.revision - A specific revision to select from the source repository; this can be a commit, tag, or branch name. If not defined, it falls back to the Git repository’s default branch.
- source.contextDir - For repositories where the source code is not located at the root folder, you can specify this path here.
By default, the Build controller does not validate that the Git repository exists. If the validation is desired, users can explicitly define the build.shipwright.io/verify.repository
annotation with true
. For example:
Example of a Build with the build.shipwright.io/verify.repository annotation to enable the spec.source.git.url validation:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
  annotations:
    build.shipwright.io/verify.repository: "true"
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
Note: The Build controller only validates two scenarios. The first one is when the endpoint uses an http/https
protocol. The second one is when an ssh
protocol such as git@
has been defined but a referenced secret, such as source.git.cloneSecret
, has not been provided.
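For illustration, a Build using an ssh protocol URL together with a cloneSecret might look like the following sketch (the secret name ssh-credentials is an assumption):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: git@github.com:shipwright-io/sample-go.git
      # name of a secret in the same namespace containing the SSH private key (assumed)
      cloneSecret: ssh-credentials
```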
Example of a Build
with a source with credentials defined by the user.
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/sclorg/nodejs-ex
      cloneSecret: source-repository-credentials
Example of a Build
with a source that specifies a specific subfolder on the repository.
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-custom-context-dockerfile
spec:
  source:
    type: Git
    git:
      url: https://github.com/SaschaSchwarze0/npm-simple
    contextDir: renamed
Example of a Build
that specifies the tag v0.1.0
for the git repository:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
      revision: v0.1.0
    contextDir: docker-build
Example of a Build
that specifies environment variables:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  env:
    - name: EXAMPLE_VAR_1
      value: "example-value-1"
    - name: EXAMPLE_VAR_2
      value: "example-value-2"
Example of a Build
that uses the Kubernetes Downward API to expose a Pod
field as an environment variable:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
Example of a Build
that uses the Kubernetes Downward API to expose a Container
field as an environment variable:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  env:
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: my-container
          resource: limits.memory
Defining the Strategy
A Build resource can specify the BuildStrategy or ClusterBuildStrategy to use. Defining the strategy is straightforward: you define the name and the kind. For example:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpack-nodejs-build
spec:
  strategy:
    name: buildpacks-v3
    kind: ClusterBuildStrategy
Defining ParamValues
A Build
resource can specify paramValues for parameters that are defined in the referenced BuildStrategy
. You specify these parameter values to control how the steps of the build strategy behave. You can overwrite values in the BuildRun
resource. See the related documentation for more information.
The build strategy author can define a parameter as either a simple string or an array. Depending on that, you must specify the value accordingly. The build strategy parameter can be specified with a default value. You must specify a value in the Build
or BuildRun
for parameters without a default.
You can either specify values directly or reference keys from ConfigMaps and Secrets. Note: the usage of ConfigMaps and Secrets is limited by the usage of the parameter in the build strategy steps. You can only use them if the parameter is used in the command, arguments, or environment variable values.
When using paramValues, users should avoid:
- Defining a spec.paramValues name that doesn’t match one of the spec.parameters defined in the BuildStrategy.
- Defining a spec.paramValues name that collides with the Shipwright reserved parameters. These are BUILDER_IMAGE, DOCKERFILE, CONTEXT_DIR, and any name starting with shp-.
In general, paramValues are tightly bound to Strategy parameters. Please make sure you understand the contents of your strategy of choice before defining paramValues in the Build.
Example
The BuildKit sample BuildStrategy
contains various parameters. Two of them are outlined here:
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildkit
  ...
spec:
  parameters:
    - name: build-args
      description: "The ARG values in the Dockerfile. Values must be in the format KEY=VALUE."
      type: array
      defaults: []
    - name: cache
      description: "Configure BuildKit's cache usage. Allowed values are 'disabled' and 'registry'. The default is 'registry'."
      type: string
      default: registry
  ...
  steps:
    ...
The cache
parameter is a simple string. You can provide it like this in your Build:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
    - name: cache
      value: disabled
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
    ...
  output:
    ...
If you have multiple Builds and want to control this parameter centrally, then you can create a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: buildkit-configuration
  namespace: a-namespace
data:
  cache: disabled
You reference the ConfigMap as a parameter value like this:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
    - name: cache
      configMapValue:
        name: buildkit-configuration
        key: cache
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
    ...
  output:
    ...
The build-args parameter is defined as an array. In the BuildKit strategy, you use build-args to set the ARG values in the Dockerfile, specified as key-value pairs separated by an equals sign, for example, NODE_VERSION=16. Your Build then looks like this (the value for cache is retained to outline how multiple paramValues can be set):
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
    - name: cache
      configMapValue:
        name: buildkit-configuration
        key: cache
    - name: build-args
      values:
        - value: NODE_VERSION=16
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
    ...
  output:
    ...
Like simple values, you can also reference ConfigMaps and Secrets for every item in the array. Example:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: a-build
  namespace: a-namespace
spec:
  paramValues:
    - name: cache
      configMapValue:
        name: buildkit-configuration
        key: cache
    - name: build-args
      values:
        - configMapValue:
            name: project-configuration
            key: node-version
          format: NODE_VERSION=${CONFIGMAP_VALUE}
        - value: DEBUG_MODE=true
        - secretValue:
            name: npm-registry-access
            key: npm-auth-token
          format: NPM_AUTH_TOKEN=${SECRET_VALUE}
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  source:
    ...
  output:
    ...
Here, we pass three items in the build-args array:
- The first item references a ConfigMap. Because the ConfigMap just contains the value (for example "16") as the data of the node-version key, the format setting is used to prepend NODE_VERSION= to make it a complete key-value pair.
- The second item is just a hard-coded value.
- The third item references a Secret, in the same way as with ConfigMaps.
Note: The logging output of BuildKit contains expanded ARG
s in RUN
commands. Also, such information ends up in the final container image if you use such args in the final stage of your Dockerfile. An alternative approach to pass secrets is using secret mounts. The BuildKit sample strategy supports them using the secrets
parameter.
Defining the Builder or Dockerfile
In the Build resource, you use the parameters (spec.paramValues) to specify the image that contains the tools to build the final image. For example, the following Build definition specifies the Dockerfile to use:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-golang-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  paramValues:
    - name: dockerfile
      value: Dockerfile
Another example is when the user chooses the builder
image for a specific language as part of the source-to-image
buildStrategy:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
    - name: builder-image
      value: "docker.io/centos/nodejs-10-centos7"
Defining the Output
A Build resource can specify the output where it should push the image. For external private registries, it is recommended to specify a secret with the related data to access it. An option is available to specify annotations and labels for the output image. The annotations and labels mentioned here are specific to the container image and do not relate to the Build annotations. Analogously, the timestamp refers to the timestamp of the output image.
Note: When you specify annotations, labels, or timestamp, the output image may get pushed twice, depending on the respective strategy. For example, strategies that push the image to the registry as part of their build step will lead to an additional push of the image in case image processing like labels is configured. If you have automation based on push events in your container registry, be aware of this behavior.
For example, the user specifies a public registry:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
    - name: builder-image
      value: "docker.io/centos/nodejs-10-centos7"
  output:
    image: image-registry.openshift-image-registry.svc:5000/build-examples/nodejs-ex
Another example is when the user specifies a private registry:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
    - name: builder-image
      value: "docker.io/centos/nodejs-10-centos7"
  output:
    image: us.icr.io/source-to-image-build/nodejs-ex
    pushSecret: icr-knbuild
Example of a user specifying image annotations and labels:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: s2i-nodejs-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-nodejs
    contextDir: source-build/
  strategy:
    name: source-to-image
    kind: ClusterBuildStrategy
  paramValues:
    - name: builder-image
      value: "docker.io/centos/nodejs-10-centos7"
  output:
    image: us.icr.io/source-to-image-build/nodejs-ex
    pushSecret: icr-knbuild
    annotations:
      "org.opencontainers.image.source": "https://github.com/org/repo"
      "org.opencontainers.image.url": "https://my-company.com/images"
    labels:
      "maintainer": "team@my-company.com"
      "description": "This is my cool image"
Example of a user-specified image timestamp set to SourceTimestamp, so the output timestamp matches the timestamp of the Git commit used for the build:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: sample-go-build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: source-build
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  output:
    image: some.registry.com/namespace/image:tag
    pushSecret: credentials
    timestamp: SourceTimestamp
Annotations added to the output image can be verified by running the command:
docker manifest inspect us.icr.io/source-to-image-build/nodejs-ex | jq ".annotations"
You can verify which labels were added to the output image that is available on the host machine by running the command:
docker inspect us.icr.io/source-to-image-build/nodejs-ex | jq ".[].Config.Labels"
Defining Retention Parameters
A Build resource can specify how long a completed BuildRun can exist and the number of failed or succeeded BuildRuns that should exist. Instead of manually cleaning up old BuildRuns, retention parameters provide an alternate method for cleaning up BuildRuns automatically.
As part of the retention parameters, we have the following fields:
- retention.atBuildDeletion - Defines if all related BuildRuns need to be deleted when deleting the Build. The default is false.
- retention.succeededLimit - Defines the number of succeeded BuildRuns for a Build that can exist.
- retention.failedLimit - Defines the number of failed BuildRuns for a Build that can exist.
- retention.ttlAfterFailed - Specifies the duration for which a failed BuildRun can exist.
- retention.ttlAfterSucceeded - Specifies the duration for which a successful BuildRun can exist.
An example of a user using both TTL and Limit retention fields. In such a configuration, a BuildRun will get deleted once the first criterion is met.
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-retention-ttl
spec:
  source:
    type: Git
    git:
      url: "https://github.com/shipwright-io/sample-go"
    contextDir: docker-build
  strategy:
    kind: ClusterBuildStrategy
  output:
    ...
  retention:
    ttlAfterFailed: 30m
    ttlAfterSucceeded: 1h
    failedLimit: 10
    succeededLimit: 20
Note: When changes are made to the retention.failedLimit and retention.succeededLimit values, they come into effect as soon as the build is applied, thereby enforcing the new limits. On the other hand, changing the retention.ttlAfterFailed and retention.ttlAfterSucceeded values only affects new BuildRuns; old BuildRuns adhere to the old TTL retention values. If TTL values are defined in BuildRun specifications as well as Build specifications, priority is given to the values defined in the BuildRun specifications.
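As a sketch of that precedence rule, a BuildRun can carry its own TTL values, which then take priority over those in the Build (the BuildRun name below is an assumption; the Build build-retention-ttl is taken from the example above):

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: build-retention-ttl-run
spec:
  build:
    name: build-retention-ttl
  retention:
    # overrides the Build's ttlAfterFailed (30m) for this BuildRun only
    ttlAfterFailed: 10m
```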
Defining Volumes
Builds can declare volumes. They must override volumes defined by the corresponding BuildStrategy. If a volume is not overridable, then the BuildRun will eventually fail.
Volumes follow the declaration of Pod volumes, so all the usual volumeSource types are supported.
Here is an example of a Build object that overrides volumes:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-name
spec:
  source:
    type: Git
    git:
      url: https://github.com/example/url
  strategy:
    name: buildah
    kind: ClusterBuildStrategy
  paramValues:
    - name: dockerfile
      value: Dockerfile
  output:
    image: registry/namespace/image:latest
  volumes:
    - name: volume-name
      configMap:
        name: test-config
Defining Triggers
Using triggers, you can submit BuildRun instances when certain events happen. The idea is to trigger Shipwright builds in an event-driven fashion; for that purpose, you can watch certain types of events.
Note: Triggers rely on the Shipwright Triggers project being deployed and configured in the same Kubernetes cluster where you run Shipwright Build. If it is not set up, the triggers defined in a Build are ignored.
The types of events under watch are defined on the .spec.trigger attribute. Consider the following example:
apiVersion: shipwright.io/v1beta1
kind: Build
spec:
  source:
    type: Git
    git:
      url: https://github.com/shipwright-io/sample-go
      cloneSecret: webhook-secret
    contextDir: docker-build
  trigger:
    when: []
Certain types of events will use attributes defined on .spec.source
to complete the information needed in order to dispatch events.
GitHub
The GitHub type is meant to react to events coming from the GitHub webhook interface. The events are compared against the existing Build resources, and therefore Shipwright can identify the Build objects based on .spec.source.git.url combined with the attributes on .spec.trigger.when[].github.
To identify a given Build object, the first criterion is the repository URL, and then the branch name listed on the GitHub event payload must also match, following these criteria:
- First, the branch name is checked against the .spec.trigger.when[].github.branches entries
- If .spec.trigger.when[].github.branches is empty, the branch name is compared against .spec.source.git.revision
- If spec.source.git.revision is empty, the default revision name is used (“main”)
The following snippet shows a configuration matching Push
and PullRequest
events on the main
branch, for example:
# [...]
spec:
  source:
    git:
      url: https://github.com/shipwright-io/sample-go
  trigger:
    when:
      - name: push and pull-request on the main branch
        type: GitHub
        github:
          events:
            - Push
            - PullRequest
          branches:
            - main
Image
In order to watch over images, in combination with the Image controller, you can trigger new builds when those container image names change.
For instance, let’s imagine the image named ghcr.io/some/base-image is used as input for the build process, and every time it changes, we would like to trigger a new build. Please consider the following snippet:
# [...]
spec:
  trigger:
    when:
      - name: watching for the base-image changes
        type: Image
        image:
          names:
            - ghcr.io/some/base-image:latest
Tekton Pipeline
Shipwright can also be used in combination with Tekton Pipeline. You can configure the Build to watch for Pipeline resources in Kubernetes, reacting when the object reaches the desired status (.objectRef.status), identified either by its name (.objectRef.name) or a label selector (.objectRef.selector). The example below uses the label selector approach:
# [...]
spec:
  trigger:
    when:
      - name: watching over for the Tekton Pipeline
        type: Pipeline
        objectRef:
          status:
            - Succeeded
          selector:
            label: value
While the next snippet uses the object name for identification:
# [...]
spec:
  trigger:
    when:
      - name: watching over for the Tekton Pipeline
        type: Pipeline
        objectRef:
          status:
            - Succeeded
          name: tekton-pipeline-name
BuildRun Deletion
A Build can automatically delete its related BuildRuns. To enable this feature, set spec.retention.atBuildDeletion to true in the Build instance. The default value is false. See an example of how to define this field:
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: kaniko-golang-build
spec:
  retention:
    atBuildDeletion: true
  # [...]
2.2 - BuildStrategies
Overview
There are two types of strategies, the ClusterBuildStrategy (clusterbuildstrategies.shipwright.io/v1beta1) and the BuildStrategy (buildstrategies.shipwright.io/v1beta1). Both strategies define a shared group of steps needed to fulfill the application build.
A ClusterBuildStrategy is available cluster-wide, while a BuildStrategy is available within a namespace.
Available ClusterBuildStrategies
Well-known strategies can be bootstrapped from here. The currently supported ClusterBuildStrategies are:
Available BuildStrategies
The currently supported namespaced BuildStrategies are:
Buildah
The buildah
ClusterBuildStrategy uses buildah
to build and push a container image, out of a Dockerfile
. The Dockerfile
should be specified on the Build
resource.
The strategy is available in two formats:
Learn more about the differences between shipwright-managed and strategy-managed push
Installing Buildah Strategy
To install, use:
kubectl apply -f samples/v1beta1/buildstrategy/buildah/buildstrategy_buildah_shipwright_managed_push_cr.yaml
kubectl apply -f samples/v1beta1/buildstrategy/buildah/buildstrategy_buildah_strategy_managed_push_cr.yaml
Buildpacks v3
The buildpacks-v3 BuildStrategy/ClusterBuildStrategy uses a Cloud Native Buildpacks (CNB) builder container image and runs the buildpacks lifecycle commands.
Installing Buildpacks v3 Strategy
You can install the BuildStrategy
in your namespace or install the ClusterBuildStrategy
at cluster scope so that it can be shared across namespaces.
To install the cluster scope strategy, you can choose between the Paketo and Heroku buildpacks families:
# Paketo
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_cr.yaml
# Heroku
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml
To install the namespaced scope strategy, you can choose between the Paketo and Heroku buildpacks families:
# Paketo
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_namespaced_cr.yaml
# Heroku
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml
Kaniko
The kaniko
ClusterBuildStrategy uses Kaniko's executor to build a container image from a Dockerfile and a context directory. The kaniko-trivy
ClusterBuildStrategy adds trivy scanning and refuses to push images with critical vulnerabilities.
Installing Kaniko Strategy
To install the cluster scope strategy, use:
kubectl apply -f samples/v1beta1/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml
Scanning with Trivy
You can also incorporate scanning into the ClusterBuildStrategy. The kaniko-trivy
ClusterBuildStrategy builds the image with kaniko
, then scans with trivy. The BuildRun will then exit with an error if there is a critical vulnerability, instead of pushing the vulnerable image into the container registry.
To install the cluster scope strategy, use:
kubectl apply -f samples/v1beta1/buildstrategy/kaniko/buildstrategy_kaniko-trivy_cr.yaml
Note: image scanning is not a substitute for trusting the Dockerfile you are building. The build process itself is also susceptible if the Dockerfile contains a vulnerability. Frameworks/strategies such as buildpacks or source-to-image (which avoid directly building a Dockerfile) should be considered if you need guardrails around the code you want to build.
BuildKit
BuildKit is composed of the buildctl
client and the buildkitd
daemon. The buildkit
ClusterBuildStrategy runs in daemonless mode, where the client and an ephemeral daemon run in a single container. In addition, it runs without privileges (rootless).
Cache Exporters
By default, the buildkit
ClusterBuildStrategy will use caching to optimize the build times. When pushing an image to a registry, it will use the inline export cache, which appends cache information to the image that is built. Please refer to export-cache docs for more information. Caching can be disabled by setting the cache
parameter to "disabled"
. See Defining ParamValues for more information.
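For illustration, a Build referencing the buildkit strategy could disable caching via paramValues like this (a minimal sketch; metadata, source, and output are omitted):

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
# [...]
spec:
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  paramValues:
    # disables BuildKit's inline export cache for this Build
    - name: cache
      value: "disabled"
```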
Build-args and secrets
The sample build strategy contains array parameters to set values for ARG
s in your Dockerfile, and for mounts with type=secret. The parameter names are build-args
and secrets
. Defining ParamValues contains example usage.
The sample build strategy contains a platforms
array parameter that you can set to leverage BuildKit’s support to build multi-platform images. If you do not set this value, the image is built for the platform that is supported by the FROM
image. If that image supports multiple platforms, then the image will be built for the platform of your Kubernetes node.
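For example, a multi-platform build could be requested like this (the platform values are illustrative):

```yaml
spec:
  paramValues:
    - name: platforms
      values:
        - value: "linux/amd64"
        - value: "linux/arm64"
```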
Known Limitations
The buildkit
ClusterBuildStrategy currently locks the following parameters:
- To allow running rootless, it requires both AppArmor and seccomp to be disabled using the unconfined profile.
Usage in Clusters with Pod Security Standards
The BuildKit strategy contains fields related to security settings. It therefore depends on the respective cluster setup and administrative configuration. These settings are:
- Defining the
unconfined
profile for both AppArmor and seccomp as required by the underlying rootlesskit
. - The
allowPrivilegeEscalation
setting is set to true
to be able to use binaries that have the setuid
bit set in order to run with “root” level privileges. In case of BuildKit, this is required by rootlesskit
in order to set the user namespace mapping file /proc/<pid>/uid_map
. - Use of non-root user with UID 1000/GID 1000 as the
runAsUser
.
These settings have no effect if Pod Security Standards are not used.
Please note: At this point in time, there is no way to run rootlesskit
to start the BuildKit daemon without the allowPrivilegeEscalation
flag set to true
. Clusters with the Restricted
security standard in place will not be able to use this build strategy.
Installing BuildKit Strategy
To install the cluster scope strategy, use:
kubectl apply -f samples/v1beta1/buildstrategy/buildkit/buildstrategy_buildkit_cr.yaml
ko
The ko
ClusterBuildStrategy uses ko's publish command to build an image from a Golang main package.
Installing ko Strategy
To install the cluster scope strategy, use:
kubectl apply -f samples/v1beta1/buildstrategy/ko/buildstrategy_ko_cr.yaml
Parameters
The build strategy provides the following parameters that you can set in a Build or BuildRun to control its behavior:
| Parameter | Description | Default |
| --- | --- | --- |
| go-flags | Value for the GOFLAGS environment variable. | Empty |
| go-version | Version of Go, must match a tag from the golang image. | 1.21 |
| ko-version | Version of ko, must be either latest for the newest release, or a ko release name. | latest |
| package-directory | The directory inside the context directory containing the main package. | . |
| target-platform | Target platform to be built. For example: linux/arm64. Multiple platforms can be provided separated by comma, for example: linux/arm64,linux/amd64. The value all will build all platforms supported by the base image. The value current will build the platform on which the build runs. | current |
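A Build using the ko strategy might set some of these parameters as follows (a sketch; the package directory shown is hypothetical):

```yaml
spec:
  strategy:
    name: ko
    kind: ClusterBuildStrategy
  paramValues:
    - name: go-version
      value: "1.21"
    - name: package-directory
      value: ./cmd/myapp  # hypothetical main package location
```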
Volumes
| Volume | Description |
| --- | --- |
| gocache | Volume to contain the GOCACHE. Can be set to a persistent volume to optimize compilation performance for rebuilds. The default is an emptyDir volume, which means that the cached data is discarded at the end of a BuildRun. |
Source to Image
This BuildStrategy combines source-to-image and kaniko in order to generate a Dockerfile and prepare the application to be built with a builder.
s2i requires a specially crafted builder image, which can be provided via the builderImage parameter on the Build resource.
Installing Source to Image Strategy
To install the cluster scope strategy use:
kubectl apply -f samples/v1beta1/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml
Build Steps
- s2i, to generate a Dockerfile and prepare the source code for the image build;
- kaniko, to create and push the container image to what is defined as output.image.
Strategy parameters
Strategy parameters allow strategy authors to parameterize their strategy definition, letting users control the parameter values via the Build
or BuildRun
resources.
Users defining parameters under their strategies need to understand the following:
Definition: A list of parameters should be defined under spec.parameters
. Each list item should consist of a name, a description, a type (either "array"
or "string"
) and optionally a default value (for type=string), or default values (for type=array). If no default(s) are provided, then the user must define a value in the Build or BuildRun.
Usage: In order to use a parameter in the strategy steps, use the following syntax for type=string: $(params.your-parameter-name)
. String parameters can be used in all places in the buildSteps
. Some example scenarios are:
- image: to use a custom tag, for example golang:$(params.go-version) (as it is done in the ko sample build strategy)
- args: to pass data into your builder command
- env: to force a user to provide a value for an environment variable
Arrays are referenced using $(params.your-array-parameter-name[*])
, and can only be used as the value for args or command because these are defined as arrays by Kubernetes. For every item in the array, an arg will be set. For example, if you specify this in your build strategy step:
spec:
parameters:
- name: tool-args
description: Parameters for the tool
type: array
steps:
- name: a-step
command:
- some-tool
args:
- $(params.tool-args[*])
If the build user sets the value of tool-args to ["--some-arg", "some-value"], then the Pod will contain these args:
spec:
containers:
- name: a-step
args:
...
- --some-arg
- some-value
Parameterize: Any Build
or BuildRun
referencing your strategy can set a value for the your-parameter-name parameter if needed.
Note: Users can provide parameter values as simple strings or as references to keys in ConfigMaps and Secrets. If they use a ConfigMap or Secret, then the value can only be used if the parameter is used in the command
, args
, or env
section of the buildSteps
. For example, the above mentioned scenario to set a step’s image
to golang:$(params.go-version)
does not allow the usage of ConfigMaps or Secrets.
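As a sketch, referencing a ConfigMap key for a string parameter in a BuildRun could look like this; the ConfigMap name and key are hypothetical:

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
# [...]
spec:
  paramValues:
    - name: insecure-registry
      configMapValue:
        name: registry-settings  # hypothetical ConfigMap
        key: insecure            # hypothetical key
```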
The following example is from the BuildKit sample build strategy. It defines and uses several parameters:
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
name: buildkit
...
spec:
parameters:
- name: build-args
description: "The values for the ARGs in the Dockerfile. Values must be in the format KEY=VALUE."
type: array
defaults: []
- name: cache
description: "Configure BuildKit's cache usage. Allowed values are 'disabled' and 'registry'. The default is 'registry'."
type: string
default: registry
- name: insecure-registry
type: string
description: "enables the push to an insecure registry"
default: "false"
- name: secrets
description: "The secrets to pass to the build. Values must be in the format ID=FILE_CONTENT."
type: array
defaults: []
- name: dockerfile
description: The path to the Dockerfile to be used for building the image.
type: string
default: "Dockerfile"
steps:
...
- name: build-and-push
image: moby/buildkit:nightly-rootless
imagePullPolicy: Always
workingDir: $(params.shp-source-root)
...
command:
- /bin/ash
args:
- -c
- |
set -euo pipefail
# Prepare the file arguments
DOCKERFILE_PATH='$(params.shp-source-context)/$(params.dockerfile)'
DOCKERFILE_DIR="$(dirname "${DOCKERFILE_PATH}")"
DOCKERFILE_NAME="$(basename "${DOCKERFILE_PATH}")"
# We only have ash here and therefore no bash arrays to help add dynamic arguments (the build-args) to the build command.
echo "#!/bin/ash" > /tmp/run.sh
echo "set -euo pipefail" >> /tmp/run.sh
echo "buildctl-daemonless.sh \\" >> /tmp/run.sh
echo "build \\" >> /tmp/run.sh
echo "--progress=plain \\" >> /tmp/run.sh
echo "--frontend=dockerfile.v0 \\" >> /tmp/run.sh
echo "--opt=filename=\"${DOCKERFILE_NAME}\" \\" >> /tmp/run.sh
echo "--local=context='$(params.shp-source-context)' \\" >> /tmp/run.sh
echo "--local=dockerfile=\"${DOCKERFILE_DIR}\" \\" >> /tmp/run.sh
echo "--output=type=image,name='$(params.shp-output-image)',push=true,registry.insecure=$(params.insecure-registry) \\" >> /tmp/run.sh
if [ "$(params.cache)" == "registry" ]; then
echo "--export-cache=type=inline \\" >> /tmp/run.sh
echo "--import-cache=type=registry,ref='$(params.shp-output-image)' \\" >> /tmp/run.sh
elif [ "$(params.cache)" == "disabled" ]; then
echo "--no-cache \\" >> /tmp/run.sh
else
echo -e "An invalid value for the parameter 'cache' has been provided: '$(params.cache)'. Allowed values are 'disabled' and 'registry'."
echo -n "InvalidParameterValue" > '$(results.shp-error-reason.path)'
echo -n "An invalid value for the parameter 'cache' has been provided: '$(params.cache)'. Allowed values are 'disabled' and 'registry'." > '$(results.shp-error-message.path)'
exit 1
fi
stage=""
for a in "$@"
do
if [ "${a}" == "--build-args" ]; then
stage=build-args
elif [ "${a}" == "--secrets" ]; then
stage=secrets
elif [ "${stage}" == "build-args" ]; then
echo "--opt=\"build-arg:${a}\" \\" >> /tmp/run.sh
elif [ "${stage}" == "secrets" ]; then
# Split ID=FILE_CONTENT into variables id and data
# using head because the data could be multiline
id="$(echo "${a}" | head -1 | sed 's/=.*//')"
# This is hacky, we remove the suffix ${id}= from all lines of the data.
# If the data would be multiple lines and a line would start with ${id}=
# then we would remove it. We could force users to give us the secret
# base64 encoded. But ultimately, the best solution might be if the user
# mounts the secret and just gives us the path here.
data="$(echo "${a}" | sed "s/^${id}=//")"
# Write the secret data into a temporary file, once we have volume support
# in the build strategy, we should use a memory based emptyDir for this.
echo -n "${data}" > "/tmp/secret_${id}"
# Add the secret argument
echo "--secret id=${id},src="/tmp/secret_${id}" \\" >> /tmp/run.sh
fi
done
echo "--metadata-file /tmp/image-metadata.json" >> /tmp/run.sh
chmod +x /tmp/run.sh
/tmp/run.sh
# Store the image digest
sed -E 's/.*containerimage.digest":"([^"]*).*/\1/' < /tmp/image-metadata.json > '$(results.shp-image-digest.path)'
# That's the separator between the shell script and its args
- --
- --build-args
- $(params.build-args[*])
- --secrets
- $(params.secrets[*])
See more information on how to use these parameters in a Build
or BuildRun
in the related documentation.
System parameters
Contrary to the strategy spec.parameters
, system parameters and their values are defined at runtime. You can use them when defining the steps of a build strategy to access system information as well as information provided by the user in their Build or BuildRun. The following parameters are available:
| Parameter | Description |
| --- | --- |
| $(params.shp-source-root) | The absolute path to the directory that contains the user’s sources. |
| $(params.shp-source-context) | The absolute path to the context directory of the user’s sources. If the user specified no value for spec.source.contextDir in their Build, then this value will equal the value for $(params.shp-source-root). Note that this directory is not guaranteed to exist at the time the container for your step is started; you can therefore not use this parameter as a step’s working directory. |
| $(params.shp-output-directory) | The absolute path to a directory that the build strategy should store the image in. You can store a single tarball containing a single image, or an OCI image layout. |
| $(params.shp-output-image) | The URL of the image that the user wants to push, as specified in the Build’s spec.output.image, or as an override from the BuildRun’s spec.output.image. |
| $(params.shp-output-insecure) | A flag that indicates the output image’s registry location is insecure because it uses a certificate not signed by a certificate authority, or uses HTTP. |
Output directory vs. output image
As a build strategy author, you decide whether your build strategy or Shipwright pushes the build image to the container registry:
- If you DO NOT use
$(params.shp-output-directory)
, then Shipwright assumes that your build strategy PUSHES the image. We call this a strategy-managed push. - If you DO use
$(params.shp-output-directory)
, then Shipwright assumes that your build strategy does NOT PUSH the image. We call this a shipwright-managed push.
When you use the $(params.shp-output-directory)
parameter, then Shipwright will also set the image-related system results.
If you are uncertain about how to implement your build strategy, then follow this guidance:
- If your build strategy tool cannot locally store an image but always pushes it, then you must do the push operation. An example is the Buildpacks strategy. You SHOULD respect the
$(params.shp-output-insecure)
parameter. - If your build strategy tool can locally store an image, then the choice depends on how you expect your build users to make use of your strategy, and the nature of your strategy.
- Some build strategies do not produce all layers of an image, but use a common base image and put one or more layers on top with the application. An example is
ko
. Such base image layers are often already present in the destination registry (like in rebuilds). If the strategy can perform the push operation, then it can optimize the process and can omit the download of the base image when it is not required to push it. In the case of a shipwright-managed push, the complete image must be locally stored in $(params.shp-output-directory)
, which implies that a base image must always be downloaded. - Some build strategy tools do not make it easy to determine the digest or size of the image, which can make it complex for your to set the strategy results. In the case of a shipwright-managed push, Shipwright has the responsibility to set them.
- Build users can configure the build to amend additional annotations, or labels to the final image. In the case of a shipwright-managed push, these can be set directly and the image will only be pushed once. In a strategy-managed push scenario, your build strategy will push the first version of the image without those annotations and labels. Shipwright will then mutate the image and push it again with the updated annotations and labels. Such a duplicate push can cause unexpected behavior with registries that trigger other actions when an image gets pushed, or that do not allow overwriting a tag.
- The Shipwright maintainers plan to provide more capabilities in the future that need the image locally, such as vulnerability scanning, or software bill of material (SBOM) creation. These capabilities may be only fully supported with shipwright-managed push.
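A shipwright-managed push strategy step might therefore look like the following sketch, where the builder image, CLI, and its flags are hypothetical; the only requirement is that the step leaves a tarball or OCI image layout in the output directory:

```yaml
spec:
  steps:
    - name: build
      image: example.com/my-builder:latest  # hypothetical builder image
      command:
        - my-build-tool                     # hypothetical CLI
      args:
        - --context=$(params.shp-source-context)
        # store the image locally instead of pushing; Shipwright pushes it afterwards
        - --output=$(params.shp-output-directory)/image.tar
```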
System parameters vs Strategy Parameters Comparison
| Parameter Type | User Configurable | Definition |
| --- | --- | --- |
| System Parameter | No | At run-time, by the BuildRun controller. |
| Strategy Parameter | Yes | At build-time, during the BuildStrategy creation. |
Securely referencing string parameters
In build strategy steps, string parameters are referenced using $(params.PARAM_NAME)
. This applies to system parameters, and those parameters defined in the build strategy. You can reference those parameters at many locations in the build steps, such as environment variables values, arguments, image, and more. In the Pod, all $(params.PARAM_NAME)
tokens will be replaced using simple string replacement. This is safe in most locations but requires your attention when you define an inline script using an argument. For example:
spec:
parameters:
- name: sample-parameter
description: A sample parameter
type: string
steps:
- name: sample-step
command:
- /bin/bash
args:
- -c
- |
set -euo pipefail
some-tool --sample-argument "$(params.sample-parameter)"
This opens the door to script injection, for example if the user sets the sample-parameter
to argument-value" && malicious-command && echo "
, the resulting pod argument will look like this:
- |
set -euo pipefail
some-tool --sample-argument "argument-value" && malicious-command && echo ""
To securely pass a parameter value into a script-style argument, you can choose between these two approaches:
Using environment variables. This is used in some of our sample strategies, for example ko, or buildpacks. Basically, instead of directly using the parameter inside the script, you pass it via environment variable. Using quoting, shells ensure that no command injection is possible:
spec:
parameters:
- name: sample-parameter
description: A sample parameter
type: string
steps:
- name: sample-step
env:
- name: PARAM_SAMPLE_PARAMETER
value: $(params.sample-parameter)
command:
- /bin/bash
args:
- -c
- |
set -euo pipefail
some-tool --sample-argument "${PARAM_SAMPLE_PARAMETER}"
Using arguments. This is used in some of our sample build strategies, for example buildah. Here, you use arguments to your own inline script. Appropriate shell quoting guards against command injection.
spec:
parameters:
- name: sample-parameter
description: A sample parameter
type: string
steps:
- name: sample-step
command:
- /bin/bash
args:
- -c
- |
set -euo pipefail
SAMPLE_PARAMETER="$1"
some-tool --sample-argument "${SAMPLE_PARAMETER}"
- --
- $(params.sample-parameter)
System results
If you are using a strategy-managed push (see output directory vs. output image), you can optionally store the size and digest of the image your build strategy created in a set of files.
| Result file | Description |
| --- | --- |
| $(results.shp-image-digest.path) | File to store the digest of the image. |
| $(results.shp-image-size.path) | File to store the compressed size of the image. |
You can look at sample build strategies, such as Buildpacks, to see how they fill some or all of the results files.
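A minimal sketch of a step that fills these result files follows; the digest and size values are placeholders, since a real strategy derives them from the output of its build tool:

```yaml
steps:
  - name: report-results
    image: docker.io/library/busybox:latest
    command:
      - /bin/sh
    args:
      - -c
      - |
        # placeholder values; a real strategy obtains these from its build tool
        printf '%s' 'sha256:07626e3c...' > '$(results.shp-image-digest.path)'
        printf '%s' '1989004' > '$(results.shp-image-size.path)'
```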
This information will be available in the .status.output
section of the BuildRun.
apiVersion: shipwright.io/v1beta1
kind: BuildRun
# [...]
status:
# [...]
output:
digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53
size: 1989004
# [...]
Additionally, you can store error details for debugging purposes when a BuildRun fails using your strategy.
| Result file | Description |
| --- | --- |
| $(results.shp-error-reason.path) | File to store the error reason. |
| $(results.shp-error-message.path) | File to store the error message. |
Reason is intended to be a one-word CamelCase classification of the error source, with the first letter capitalized.
Error details are only propagated if the build container terminates with a non-zero exit code.
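For example, a step could populate the error results right before terminating with a non-zero exit code; the failing command in this sketch is hypothetical:

```yaml
args:
  - -c
  - |
    if ! my-build-tool; then  # hypothetical build command
      printf '%s' 'BuildToolFailed' > '$(results.shp-error-reason.path)'
      printf '%s' 'The build tool failed, check the step logs for details.' > '$(results.shp-error-message.path)'
      exit 1
    fi
```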
This information will be available in the .status.failureDetails
section of the BuildRun.
apiVersion: shipwright.io/v1beta1
kind: BuildRun
# [...]
status:
# [...]
failureDetails:
location:
container: step-source-default
pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr
message: The source repository does not exist, or you have insufficient permission
to access it.
reason: GitRemotePrivate
Security Contexts
In a build strategy, it is recommended that you define a securityContext
with a runAsUser and runAsGroup:
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
This runAs configuration will be used for all shipwright-managed steps, such as the step that retrieves the source code, and for the steps you define in the build strategy. This ensures that all steps share the same runAs configuration, which eliminates file permission problems.
Without a securityContext for the build strategy, shipwright-managed steps run with the runAsUser and runAsGroup defined in the configuration’s container templates, which is potentially a different user than the one used in your build strategy. This can result in issues when, for example, source code is downloaded as user A (as defined by the Git container template) but your strategy accesses it as user B.
In build strategy steps, you can define a step-specific securityContext that matches Kubernetes’ security context, where you can configure other security aspects such as capabilities or privileged containers.
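For instance, a step-level securityContext could add specific Linux capabilities on top of the strategy-level runAs settings; the image and capabilities in this sketch are illustrative:

```yaml
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
  steps:
    - name: build
      image: example.com/builder:latest  # hypothetical image
      securityContext:
        capabilities:
          add:
            - SETUID  # illustrative capabilities the step might need
            - SETGID
```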
Steps Resource Definition
All strategy steps can include a definition of resources (limits and requests) for CPU, memory, and disk. For strategies with more than one step, each step (container) could require more resources than others. Strategy admins are free to define the values that they consider the best fit for each step. Also, identical strategies with the same steps that differ only in their name and step resources can be installed on the cluster, to allow users to create builds with smaller or larger resource requirements.
Strategies with different resources
If strategy admins require multiple flavors of the same strategy, where one strategy has more resources than the other, then multiple strategies for the same type should be defined on the cluster. In the following example, we use Kaniko as the type:
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
name: kaniko-small
spec:
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:v1.21.1
workingDir: $(params.shp-source-root)
securityContext:
runAsUser: 0
capabilities:
add:
- CHOWN
- DAC_OVERRIDE
- FOWNER
- SETGID
- SETUID
- SETFCAP
- KILL
env:
- name: DOCKER_CONFIG
value: /tekton/home/.docker
- name: AWS_ACCESS_KEY_ID
value: NOT_SET
- name: AWS_SECRET_KEY
value: NOT_SET
command:
- /kaniko/executor
args:
- --skip-tls-verify=true
- --dockerfile=$(params.dockerfile)
- --context=$(params.shp-source-context)
- --destination=$(params.shp-output-image)
- --snapshot-mode=redo
- --push-retry=3
resources:
limits:
cpu: 250m
memory: 65Mi
requests:
cpu: 250m
memory: 65Mi
parameters:
- name: dockerfile
description: The path to the Dockerfile to be used for building the image.
type: string
default: "Dockerfile"
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
name: kaniko-medium
spec:
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:v1.21.1
workingDir: $(params.shp-source-root)
securityContext:
runAsUser: 0
capabilities:
add:
- CHOWN
- DAC_OVERRIDE
- FOWNER
- SETGID
- SETUID
- SETFCAP
- KILL
env:
- name: DOCKER_CONFIG
value: /tekton/home/.docker
- name: AWS_ACCESS_KEY_ID
value: NOT_SET
- name: AWS_SECRET_KEY
value: NOT_SET
command:
- /kaniko/executor
args:
- --skip-tls-verify=true
- --dockerfile=$(params.dockerfile)
- --context=$(params.shp-source-context)
- --destination=$(params.shp-output-image)
- --snapshot-mode=redo
- --push-retry=3
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
parameters:
- name: dockerfile
description: The path to the Dockerfile to be used for building the image.
type: string
default: "Dockerfile"
The above provides more control and flexibility for the strategy admins. End users only need to reference the proper strategy. For example:
---
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
name: kaniko-medium
spec:
source:
git:
url: https://github.com/shipwright-io/sample-go
contextDir: docker-build
strategy:
    name: kaniko-medium
kind: ClusterBuildStrategy
paramValues:
- name: dockerfile
value: Dockerfile
How does Tekton Pipelines handle resources
The Build controller relies on the Tekton Pipelines controller to schedule the pods that execute the above strategy steps. In a nutshell, the Build controller creates a Tekton TaskRun at run-time, and the TaskRun generates a new pod in the particular namespace. In order to build an image, the pod executes all the strategy steps one by one.
Tekton manages each step's resource requests in a very particular way, see the docs. That document mentions the following:
The CPU, memory, and ephemeral storage resource requests will be set to zero, or, if specified, the minimums set through LimitRanges in that Namespace, if the container image does not have the largest resource request out of all container images in the Task. This ensures that the Pod that executes the Task only requests enough resources to run a single container image in the Task rather than hoard resources for all container images in the Task at once.
Examples of Tekton resources management
For a more concrete example, let's take a look at the following scenarios:
Scenario 1. Namespace without LimitRange
, both steps with the same resource values.
If we apply the following resources:
We will see some differences between the TaskRun definition and the pod definition.
For the TaskRun, as expected, we can see the resources on each step, as we previously defined in our strategy.
$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"memory": "65Mi"
}
}
$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"memory": "65Mi"
}
}
The pod definition is different: Tekton only keeps the highest request values on one container and sets the rest (the lowest) to zero:
$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"ephemeral-storage": "0",
"memory": "65Mi"
}
}
$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "0", <------------------- See how the request is set to ZERO.
"ephemeral-storage": "0", <------------------- See how the request is set to ZERO.
"memory": "0" <------------------- See how the request is set to ZERO.
}
}
In this scenario, only one container can have the spec.resources.requests
definition. Even when both steps have the same values, only one container will get them; the others will be set to zero.
Scenario 2. Namespace without LimitRange
, steps with different resources:
If we apply the following resources:
For the TaskRun
, as expected, we can see the resources on each step
.
$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"memory": "65Mi"
}
}
$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"memory": "100Mi"
}
}
The pod definition is different: Tekton only keeps the highest request values on one container and sets the rest (the lowest) to zero:
$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m", <------------------- See how the CPU is preserved
"ephemeral-storage": "0",
"memory": "0" <------------------- See how the memory is set to ZERO
}
}
$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "0", <------------------- See how the CPU is set to zero.
"ephemeral-storage": "0",
"memory": "100Mi" <------------------- See how the memory is preserved on this container
}
}
In the above scenario, we can see how the maximum resource requests are distributed between containers. The container step-buildah-push gets the 100Mi memory request because it defined the highest number. At the same time, the container step-buildah-bud is assigned 0 for its memory request.
Scenario 3. Namespace with a LimitRange
.
When a LimitRange exists in the namespace, the Tekton Pipelines controller takes the same approach as in the above two scenarios. The difference is that the containers with lower values will get the minimum values of the LimitRange instead of zero.
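For example, with a LimitRange such as the following in the namespace (the name and values are illustrative), the zeroed requests from the previous scenarios would instead be set to 100m CPU and 64Mi memory:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: build-limit-range  # illustrative name
spec:
  limits:
    - type: Container
      min:
        cpu: 100m
        memory: 64Mi
```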
Annotations
Annotations can be defined for a BuildStrategy/ClusterBuildStrategy as for any other Kubernetes object. Annotations are propagated to the TaskRun, and from there Tekton propagates them to the Pod. Example use cases:
- The Kubernetes Network Traffic Shaping feature looks for the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to limit the network bandwidth the Pod is allowed to use.
- The AppArmor profile of a container is defined using the container.apparmor.security.beta.kubernetes.io/<container_name> annotation.
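For example, annotations set on a strategy's metadata are carried through to the build Pod; the bandwidth values below are illustrative:

```yaml
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildah
  annotations:
    # Propagated via the TaskRun to the Pod, where the
    # Network Traffic Shaping feature picks them up.
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
spec:
  # ...
```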
The following annotations are not propagated:
- kubectl.kubernetes.io/last-applied-configuration
- clusterbuildstrategy.shipwright.io/*
- buildstrategy.shipwright.io/*
- build.shipwright.io/*
- buildrun.shipwright.io/*
A Kubernetes administrator can further restrict the usage of annotations by using policy engines like Open Policy Agent.
Volumes and VolumeMounts
Build Strategies can declare volumes. These volumes can be referred to by the build steps using volumeMounts.
Volumes in a Build Strategy follow the declaration of Pod Volumes, so all the usual volumeSource types are supported.
Volumes can be overridden by Builds and BuildRuns. Build Strategy volumes therefore support an overridable flag, a boolean that defaults to false. If a volume is not overridable, a Build or BuildRun that tries to override it will fail.
Build steps can declare a volumeMount, which allows them to access volumes defined by the BuildStrategy, Build, or BuildRun.
Here is an example of a BuildStrategy object that defines volumes and volumeMounts:
apiVersion: shipwright.io/v1beta1
kind: BuildStrategy
metadata:
name: buildah
spec:
steps:
- name: build
image: quay.io/containers/buildah:v1.27.0
workingDir: $(params.shp-source-root)
command:
- buildah
- bud
- --tls-verify=false
- --layers
- -f
- $(params.dockerfile)
- -t
- $(params.shp-output-image)
- $(params.shp-source-context)
volumeMounts:
- name: varlibcontainers
mountPath: /var/lib/containers
volumes:
- name: varlibcontainers
overridable: true
emptyDir: {}
# ...
2.3 - BuildRun
Overview
The BuildRun resource (buildruns.shipwright.io/v1beta1) is the build process of a Build resource definition executed in Kubernetes.
A BuildRun resource allows the user to define:
- The BuildRun name, through which the user can monitor the status of the image construction.
- A referenced Build instance to use during the build construction.
- A service account for hosting all related secrets to build the image.
A BuildRun is available within a namespace.
BuildRun Controller
The controller watches for:
- Updates on a Build resource (CRD instance)
- Updates on a TaskRun resource (CRD instance)
When the controller reconciles, it:
- Looks for any existing owned TaskRuns and updates its parent BuildRun status.
- Retrieves the specified service account and sets it up with the specified output secret on the Build resource.
- If one does not exist, generates a new Tekton TaskRun and sets a reference to this resource (as a child of the controller).
- On any subsequent updates to the TaskRun, updates the parent BuildRun resource instance.
Configuring a BuildRun
The BuildRun
definition supports the following fields:
Required:
- apiVersion - Specifies the API version, for example shipwright.io/v1beta1.
- kind - Specifies the Kind type, for example BuildRun.
- metadata - Metadata that identifies the CRD instance, for example the name of the BuildRun.
Optional:
- spec.build.name - Specifies an existing Build resource instance to use.
- spec.build.spec - Specifies an embedded (transient) Build resource to use.
- spec.serviceAccount - Refers to the service account to use when building the image (defaults to the default service account).
- spec.timeout - Defines a custom timeout. The value needs to be parsable by ParseDuration, for example 5m. The value overwrites the value that is defined in the Build.
- spec.paramValues - Refers to a name-value(s) list to specify values for parameters defined in the BuildStrategy. These values overwrite values defined with the same name in the Build.
- spec.output.image - Refers to a custom location where the generated image will be pushed. The value overwrites the output.image value defined in the Build. (Note: other properties of the output, for example the credentials, cannot be specified in the BuildRun spec.)
- spec.output.pushSecret - References an existing secret to get access to the container registry. This secret will be added to the service account along with the ones requested by the Build.
- spec.output.timestamp - Overrides the output timestamp configuration of the referenced build to instruct the build to change the output image creation timestamp to the specified value. When omitted, the respective build strategy tool defines the output image timestamp.
- spec.env - Specifies additional environment variables that should be passed to the build container. Overrides any environment variables that are specified in the Build resource. The available variables depend on the tool used by the chosen build strategy.
Note: spec.build.name and spec.build.spec are mutually exclusive. Furthermore, the overrides for timeout, paramValues, output, and env can only be combined with spec.build.name, but not with spec.build.spec.
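Putting several of the optional fields together, a BuildRun that references an existing Build and overrides some of its settings could look like this; all names and values below are illustrative:

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: a-buildrun
spec:
  build:
    name: a-build            # existing Build in the same namespace
  serviceAccount: pipeline   # SA hosting the registry/git secrets
  timeout: 10m               # overrides the Build's timeout
  output:
    image: registry.example.com/my-org/my-app:pr-123  # overrides output.image
  env:
    - name: EXAMPLE_VAR
      value: "example-value"
```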
Defining the Build Reference
A BuildRun resource can reference a Build resource that indicates what image to build. For example:
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: buildpack-nodejs-buildrun-namespaced
spec:
build:
name: buildpack-nodejs-build-namespaced
Defining the Build Specification
A complete BuildSpec can be embedded into the BuildRun for the build.
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: standalone-buildrun
spec:
build:
spec:
source:
type: Git
git:
url: https://github.com/shipwright-io/sample-go.git
contextDir: source-build
strategy:
kind: ClusterBuildStrategy
name: buildpacks-v3
output:
image: foo/bar:latest
Defining the Build Source
BuildRuns support the specification of a Local type source. This is useful when working in development mode, without forcing a user to commit and push changes to their version control system. For more information, please refer to SHIP 0016 - enabling local source code.
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: local-buildrun
spec:
build:
name: a-build
source:
type: Local
local:
name: local-source
timeout: 3m
Defining ParamValues
A BuildRun resource can define paramValues for parameters specified in the build strategy. If a value has already been provided for a parameter with the same name in the Build, the value from the BuildRun takes precedence.
For example, the following BuildRun overrides the value for the cache param, which is defined in the a-build Build resource.
---
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
name: a-build
namespace: a-namespace
spec:
paramValues:
- name: cache
value: disabled
strategy:
name: buildkit
kind: ClusterBuildStrategy
source:
...
output:
...
---
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: a-buildrun
namespace: a-namespace
spec:
build:
name: a-build
paramValues:
- name: cache
value: registry
See more about paramValues usage in the related Build resource docs.
Defining the ServiceAccount
A BuildRun resource can define a service account to use. Usually this service account will host all related secrets referenced on the Build resource, for example:
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: buildpack-nodejs-buildrun-namespaced
spec:
build:
name: buildpack-nodejs-build-namespaced
serviceAccount: pipeline
You can also set the value of spec.serviceAccount to ".generate". This will generate the service account during runtime for you. The name of the generated service account is the same as that of the BuildRun.
Note: When the service account is not defined, the BuildRun uses the pipeline service account if it exists in the namespace, and falls back to the default service account.
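A sketch of a BuildRun using the generated service account; the referenced build name is illustrative:

```yaml
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildpack-nodejs-buildrun-namespaced
spec:
  build:
    name: buildpack-nodejs-build-namespaced
  # A service account named after the BuildRun is generated at runtime.
  serviceAccount: ".generate"
```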
Defining Retention Parameters
A BuildRun resource can specify how long a completed BuildRun can exist. Instead of manually cleaning up old BuildRuns, retention parameters provide an alternate method for cleaning up BuildRuns automatically.
As part of the BuildRun retention parameters, we have the following fields:
- retention.ttlAfterFailed - Specifies the duration for which a failed BuildRun can exist.
- retention.ttlAfterSucceeded - Specifies the duration for which a successful BuildRun can exist.
An example of a BuildRun using TTL parameters:
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: buildrun-retention-ttl
spec:
build:
name: build-retention-ttl
retention:
ttlAfterFailed: 10m
ttlAfterSucceeded: 10m
Note: In case TTL values are defined in buildrun specifications as well as build specifications, priority will be given to the values defined in the buildrun specifications.
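To illustrate the precedence, in the following pair (all names and durations are illustrative) the BuildRun's ttlAfterFailed of 10m wins over the Build's 30m:

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-retention-ttl
spec:
  retention:
    ttlAfterFailed: 30m
  # ...
---
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
  name: buildrun-retention-ttl
spec:
  build:
    name: build-retention-ttl
  retention:
    ttlAfterFailed: 10m   # takes precedence over the Build's 30m
```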
Defining Volumes
BuildRuns can declare volumes. They must override volumes defined by the corresponding BuildStrategy. If a volume is not overridable, the BuildRun will eventually fail.
If both the Build and a BuildRun that refers to it override the same volume, the one defined in the BuildRun is used.
Volumes follow the declaration of Pod Volumes, so all the usual volumeSource types are supported.
Here is an example of a BuildRun object that overrides volumes:
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: buildrun-name
spec:
build:
name: build-name
volumes:
- name: volume-name
configMap:
name: test-config
Canceling a BuildRun
To cancel a BuildRun that is currently executing, update its status to mark it as canceled.
When you cancel a BuildRun, the underlying TaskRun is marked as canceled per the Tekton cancel TaskRun feature.
Example of canceling a BuildRun:
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: buildpack-nodejs-buildrun-namespaced
spec:
# [...]
state: "BuildRunCanceled"
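Rather than editing the YAML by hand, the state can also be set with a patch; this is a sketch assuming the BuildRun name used above:

```shell
# Mark the running BuildRun as canceled; the controller then
# requests cancellation of the underlying TaskRun.
kubectl patch buildrun buildpack-nodejs-buildrun-namespaced \
  --type merge \
  --patch '{"spec":{"state":"BuildRunCanceled"}}'
```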
Automatic BuildRun deletion
We have two controllers that ensure that BuildRuns can be deleted automatically if required. This is achieved by adding retention parameters in either the build specification or the buildrun specification.
- BuildRun TTL parameters: These ensure that BuildRuns only exist for a fixed duration of time after completion.
  - buildrun.spec.retention.ttlAfterFailed: The BuildRun is deleted once the specified duration has passed and the BuildRun has failed.
  - buildrun.spec.retention.ttlAfterSucceeded: The BuildRun is deleted once the specified duration has passed and the BuildRun has succeeded.
- Build TTL parameters: These ensure that related BuildRuns only exist for a fixed duration of time after completion.
  - build.spec.retention.ttlAfterFailed: The BuildRun is deleted once the specified duration has passed and the BuildRun has failed.
  - build.spec.retention.ttlAfterSucceeded: The BuildRun is deleted once the specified duration has passed and the BuildRun has succeeded.
- Build limit parameters: These ensure that only a limited number of related BuildRuns exist.
  - build.spec.retention.succeededLimit - Defines the number of succeeded BuildRuns for a Build that can exist.
  - build.spec.retention.failedLimit - Defines the number of failed BuildRuns for a Build that can exist.
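A sketch of a Build using limit-based retention; the name and counts are illustrative:

```yaml
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: build-retention-limit
spec:
  retention:
    succeededLimit: 3   # keep at most 3 succeeded BuildRuns
    failedLimit: 1      # keep at most 1 failed BuildRun
  # ...
```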
Specifying Environment Variables
An example of a BuildRun that specifies environment variables:
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: buildpack-nodejs-buildrun-namespaced
spec:
build:
name: buildpack-nodejs-build-namespaced
env:
- name: EXAMPLE_VAR_1
value: "example-value-1"
- name: EXAMPLE_VAR_2
value: "example-value-2"
Example of a BuildRun that uses the Kubernetes Downward API to expose a Pod field as an environment variable:
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: buildpack-nodejs-buildrun-namespaced
spec:
build:
name: buildpack-nodejs-build-namespaced
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
Example of a BuildRun that uses the Kubernetes Downward API to expose a Container field as an environment variable:
apiVersion: shipwright.io/v1beta1
kind: BuildRun
metadata:
name: buildpack-nodejs-buildrun-namespaced
spec:
build:
name: buildpack-nodejs-build-namespaced
env:
- name: MEMORY_LIMIT
valueFrom:
resourceFieldRef:
containerName: my-container
resource: limits.memory
BuildRun Status
The BuildRun resource is updated as soon as the current image building status changes:
$ kubectl get buildrun buildpacks-v3-buildrun
NAME SUCCEEDED REASON MESSAGE STARTTIME COMPLETIONTIME
buildpacks-v3-buildrun Unknown Pending Pending 1s
And finally:
$ kubectl get buildrun buildpacks-v3-buildrun
NAME SUCCEEDED REASON MESSAGE STARTTIME COMPLETIONTIME
buildpacks-v3-buildrun True Succeeded All Steps have completed executing 4m28s 16s
The above allows users to get an overview of the state of the build.
Understanding the state of a BuildRun
A BuildRun resource stores the relevant information regarding the object’s state under status.conditions.
Conditions allow users to quickly understand the resource state without needing to understand resource-specific details.
For the BuildRun, we use a Condition of the type Succeeded, which is a well-known type for resources that run to completion.
The status.conditions field hosts different fields, like status, reason, and message. Users can expect these fields to be populated with relevant information.
The following table illustrates the different states a BuildRun can have under its status.conditions:
Status | Reason | CompletionTime is set | Description |
---|
Unknown | Pending | No | The BuildRun is waiting on a Pod in status Pending. |
Unknown | Running | No | The BuildRun has been validated and started to perform its work. |
Unknown | BuildRunCanceled | No | The user requested the BuildRun to be canceled. This results in the BuildRun controller requesting the TaskRun be canceled. Cancellation has not been done yet. |
True | Succeeded | Yes | The BuildRun Pod is done. |
False | Failed | Yes | The BuildRun failed in one of the steps. |
False | BuildRunTimeout | Yes | The BuildRun timed out. |
False | UnknownStrategyKind | Yes | The Build specified strategy Kind is unknown. (options: ClusterBuildStrategy or BuildStrategy) |
False | ClusterBuildStrategyNotFound | Yes | The referenced cluster strategy was not found in the cluster. |
False | BuildStrategyNotFound | Yes | The referenced namespaced strategy was not found in the cluster. |
False | SetOwnerReferenceFailed | Yes | Setting ownerreferences from the BuildRun to the related TaskRun failed. |
False | TaskRunIsMissing | Yes | The BuildRun related TaskRun was not found. |
False | TaskRunGenerationFailed | Yes | The generation of a TaskRun spec failed. |
False | MissingParameterValues | Yes | No value has been provided for some parameters that are defined in the build strategy without any default. Values for those parameters must be provided through the Build or the BuildRun. |
False | RestrictedParametersInUse | Yes | A value for a system parameter was provided. This is not allowed. |
False | UndefinedParameter | Yes | A value for a parameter was provided that is not defined in the build strategy. |
False | WrongParameterValueType | Yes | A value was provided for a build strategy parameter using the wrong type. The parameter is defined as array or string in the build strategy. Depending on that, you must provide values or a direct value. |
False | InconsistentParameterValues | Yes | A value for a parameter contained more than one of value , configMapValue , and secretValue . Any values including array items must only provide one of them. |
False | EmptyArrayItemParameterValues | Yes | An item inside the values of an array parameter contained none of value , configMapValue , and secretValue . Exactly one of them must be provided. Null array items are not allowed. |
False | IncompleteConfigMapValueParameterValues | Yes | A value for a parameter contained a configMapValue where the name or the value were empty. You must specify them to point to an existing ConfigMap key in your namespace. |
False | IncompleteSecretValueParameterValues | Yes | A value for a parameter contained a secretValue where the name or the value were empty. You must specify them to point to an existing Secret key in your namespace. |
False | ServiceAccountNotFound | Yes | The referenced service account was not found in the cluster. |
False | BuildRegistrationFailed | Yes | The related Build in the BuildRun is in a Failed state. |
False | BuildNotFound | Yes | The related Build in the BuildRun was not found. |
False | BuildRunCanceled | Yes | The BuildRun and underlying TaskRun were canceled successfully. |
False | BuildRunNameInvalid | Yes | The defined BuildRun name (metadata.name ) is invalid. The BuildRun name should be a valid label value. |
False | BuildRunNoRefOrSpec | Yes | BuildRun does not have either spec.build.name or spec.build.spec defined. There is no connection to a Build specification. |
False | BuildRunAmbiguousBuild | Yes | The defined BuildRun uses both spec.build.name and spec.build.spec . Only one of them is allowed at the same time. |
False | BuildRunBuildFieldOverrideForbidden | Yes | The defined BuildRun uses an override (e.g. timeout , paramValues , output , or env ) in combination with spec.build.spec , which is not allowed. Use the spec.build.spec to directly specify the respective value. |
False | PodEvicted | Yes | The BuildRun Pod was evicted from the node it was running on. See API-initiated Eviction and Node-pressure Eviction for more information. |
Note: We heavily rely on the Tekton TaskRun Conditions for populating the BuildRun ones, with some exceptions.
Understanding failed BuildRuns
To make it easier for users to understand why a BuildRun failed, users can infer the pod and container where the failure took place from the status.failureDetails field.
In addition, status.conditions hosts a compacted message under the message field that contains the kubectl command to run to retrieve the logs.
The status.failureDetails field also includes a detailed failure reason and message, if the build strategy provides them.
Example of failed BuildRun:
# [...]
status:
# [...]
failureDetails:
location:
container: step-source-default
pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr
message: The source repository does not exist, or you have insufficient permission
to access it.
reason: GitRemotePrivate
Understanding failed git-source step
All git-related operations support error reporting via status.failureDetails. The following table explains the possible error reasons:
Reason | Description |
---|
GitAuthInvalidUserOrPass | Basic authentication has failed. Check your username or password. Note: GitHub requires a personal access token instead of your regular password. |
GitAuthInvalidKey | The key is invalid for the specified target. Please make sure that the Git repository exists, you have sufficient permissions, and the key is in the right format. |
GitRevisionNotFound | The remote revision does not exist. Check the revision specified in your Build. |
GitRemoteRepositoryNotFound | The source repository does not exist, or you have insufficient permissions to access it. |
GitRemoteRepositoryPrivate | You are trying to access a non-existing or private repository without having sufficient permissions to access it via HTTPS. |
GitBasicAuthIncomplete | Basic Auth incomplete: Both username and password must be configured. |
GitSSHAuthUnexpected | Credential/URL inconsistency: SSH credentials were provided, but the URL is not an SSH Git URL. |
GitSSHAuthExpected | Credential/URL inconsistency: No SSH credentials provided, but the URL is an SSH Git URL. |
GitError | The specific error reason is unknown. Check the error message for more information. |
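To quickly inspect these fields on a failed BuildRun, a jsonpath query can be used; the BuildRun name below is illustrative:

```shell
# Print the failure reason and message surfaced by the controller.
kubectl get buildrun local-buildrun \
  -o jsonpath='{.status.failureDetails.reason}{"\n"}{.status.failureDetails.message}{"\n"}'
```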
Step Results in BuildRun Status
After a BuildRun completes, the .status field contains the results (.status.taskResults) emitted from the TaskRun steps generated by the BuildRun controller as part of processing the BuildRun. These results contain valuable metadata for users, like the image digest or the commit SHA of the source code used for building.
The results from the source step are surfaced to .status.sources, and the results from the output step are surfaced to the .status.output field of a BuildRun.
Example of a BuildRun with surfaced results for a git source (note that branchName is only included if the Build does not specify any revision):
# [...]
status:
buildSpec:
# [...]
output:
digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53
size: 1989004
sources:
- name: default
git:
commitAuthor: xxx xxxxxx
commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde
branchName: main
Another example of a BuildRun with surfaced results for a local source code (ociArtifact) source:
# [...]
status:
buildSpec:
# [...]
output:
digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53
size: 1989004
sources:
- name: default
ociArtifact:
digest: sha256:0f5e2070b534f9b880ed093a537626e3c7fdd28d5328a8d6df8d29cd3da760c7
Note: The digest and size of the output image are only included if the build strategy provides them. See System results.
Build Snapshot
For every BuildRun controller reconciliation, the buildSpec in the status of the BuildRun is updated if an existing owned TaskRun is present. During this update, a Build resource snapshot is generated and embedded into the status.buildSpec path of the BuildRun. A buildSpec is just a copy of the original Build spec, from which the BuildRun executed a particular image build. The snapshot approach allows developers to see the original Build configuration.
Relationship with Tekton Tasks
The BuildRun resource abstracts the image construction by delegating this work to the Tekton Pipeline TaskRun. Compared to a Tekton Pipeline Task, a TaskRun runs all steps until completion of the Task or until a failure occurs in the Task.
During reconciliation, the BuildRun controller generates a new TaskRun. The controller embeds in the TaskRun Task definition the required steps to execute. These steps are defined in the strategy referenced by the Build resource, either a ClusterBuildStrategy or a BuildStrategy.
2.4 - Authentication during builds
The following document provides an introduction around the different authentication methods that can take place during an image build when using the Build controller.
Overview
There are two places where users might need to define authentication when building images. Authentication to a container registry is the most common one, but users might also need to define authentication for pulling source code from Git. Overall, authentication is done via the definition of secrets in which the required sensitive data is stored.
Build Secrets Annotation
Users need to add the annotation build.shipwright.io/referenced.secret: "true" to a build secret so that the build controller can decide to take a reconcile action when a secret event (create, update, or delete) happens. Below is a secret example with the build annotation:
apiVersion: v1
data:
.dockerconfigjson: xxxxx
kind: Secret
metadata:
annotations:
build.shipwright.io/referenced.secret: "true"
name: secret-docker
type: kubernetes.io/dockerconfigjson
This annotation helps us filter out secrets that are not referenced on a Build instance. That means that if a secret does not have this annotation, the Build controller will not reconcile even if an event happens on the secret. Reconciling on secret events allows the Build controller to re-trigger validations on the Build configuration, helping users understand if a dependency is missing.
If you are using the kubectl command to create secrets, you can first create the build secret using kubectl create secret and then annotate it using kubectl annotate secrets. Below is an example:
kubectl -n ${namespace} create secret docker-registry example-secret --docker-server=${docker-server} --docker-username="${username}" --docker-password="${password}" --docker-email=me@here.com
kubectl -n ${namespace} annotate secrets example-secret build.shipwright.io/referenced.secret='true'
Authentication for Git
There are two ways of authenticating to Git (this applies to both GitLab and GitHub): SSH and basic authentication.
SSH authentication
For SSH authentication, you must use the Tekton annotations to specify the hostname(s) of the Git repository providers that you use. This is github.com for GitHub, or gitlab.com for GitLab.
As seen in the following example, there are two things to notice:
- The Kubernetes secret should be of the type kubernetes.io/ssh-auth.
- The data.ssh-privatekey value can be generated with base64 < ~/.ssh/id_rsa, where ~/.ssh/id_rsa is the key used to authenticate into Git.
apiVersion: v1
kind: Secret
metadata:
name: secret-git-ssh-auth
annotations:
build.shipwright.io/referenced.secret: "true"
type: kubernetes.io/ssh-auth
data:
ssh-privatekey: <base64 <~/.ssh/id_rsa>
Basic authentication
Basic authentication is very similar to the SSH one, with the following differences:
- The Kubernetes secret should be of the type kubernetes.io/basic-auth.
- The stringData should host your username and password in clear text.
apiVersion: v1
kind: Secret
metadata:
name: secret-git-basic-auth
annotations:
build.shipwright.io/referenced.secret: "true"
type: kubernetes.io/basic-auth
stringData:
username: <cleartext username>
password: <cleartext password>
Usage of git secret
With the right secret in place (note: ensure the secret is created in the proper Kubernetes namespace), users should reference it in their Build YAML definitions.
Depending on the secret type, there are two ways of doing this:
When using ssh auth, users should follow:
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
name: buildah-golang-build
spec:
source:
url: git@gitlab.com:eduardooli/newtaxi.git
credentials:
name: secret-git-ssh-auth
When using basic auth, users should follow:
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
name: buildah-golang-build
spec:
source:
url: https://gitlab.com/eduardooli/newtaxi.git
credentials:
name: secret-git-basic-auth
Authentication to container registries
For pushing images to private registries, users need to define a secret in their respective namespace.
Docker Hub
Use the following commands to generate your secret:
kubectl --namespace <YOUR_NAMESPACE> create secret docker-registry <CONTAINER_REGISTRY_SECRET_NAME> \
--docker-server=<REGISTRY_HOST> \
--docker-username=<USERNAME> \
--docker-password=<PASSWORD> \
--docker-email=me@here.com
kubectl --namespace <YOUR_NAMESPACE> annotate secrets <CONTAINER_REGISTRY_SECRET_NAME> build.shipwright.io/referenced.secret='true'
Note: When generating a secret to access Docker Hub, the REGISTRY_HOST value should be https://index.docker.io/v1/, and the username is the Docker ID.
Note: The value of PASSWORD can be your Docker Hub password or an access token. A Docker access token can be created via Account Settings, then Security in the sidebar, and the New Access Token button.
Usage of registry secret
With the right secret in place (note: Ensure creation of secret in the proper Kubernetes namespace), users should reference it on their Build YAML definitions.
For container registries, the secret should be placed under the spec.output.credentials
path.
apiVersion: shipwright.io/v1alpha1
kind: Build
metadata:
name: buildah-golang-build
...
output:
image: docker.io/foobar/sample:latest
credentials:
name: <CONTAINER_REGISTRY_SECRET_NAME>
References
See more information in the official Tekton documentation for authentication.
2.5 - Configuration
Controller Settings
The controller is installed into Kubernetes with reasonable defaults. However, some settings can be overridden using environment variables in controller.yaml.
The following environment variables are available:
Environment Variable | Description |
---|
CTX_TIMEOUT | Override the default context timeout used for all Custom Resource Definition reconciliation operations. Default is 5 (seconds). |
REMOTE_ARTIFACTS_CONTAINER_IMAGE | Specify the container image used for the .spec.sources remote artifacts download, by default it uses quay.io/quay/busybox:latest . |
TERMINATION_LOG_PATH | Path of the termination log. This is where controller application will write the reason of its termination. Default value is /dev/termination-log . |
GIT_ENABLE_REWRITE_RULE | Enable Git wrapper to setup a URL insteadOf Git config rewrite rule for the respective source URL hostname. Default is false . |
GIT_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that clone a Git repository. Default is {"image": "ghcr.io/shipwright-io/build/git:latest", "command": ["/ko-app/git"], "env": [{"name": "HOME", "value": "/shared-home"}], "securityContext":{"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser": 1000,"runAsGroup": 1000}} . The following properties are ignored as they are set by the controller: args , name . |
GIT_CONTAINER_IMAGE | Custom container image for Git clone steps. If GIT_CONTAINER_TEMPLATE is also specifying an image, then the value for GIT_CONTAINER_IMAGE has precedence. |
BUNDLE_IMAGE_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that pulls a bundle image to obtain the packaged source code. Default is {"image": "ghcr.io/shipwright-io/build/bundle:latest", "command": ["/ko-app/bundle"], "env": [{"name": "HOME","value": "/shared-home"}], "securityContext":{"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser":1000,"runAsGroup":1000}} . The following properties are ignored as they are set by the controller: args , name . |
BUNDLE_IMAGE_CONTAINER_IMAGE | Custom container image that pulls a bundle image to obtain the packaged source code. If BUNDLE_IMAGE_CONTAINER_TEMPLATE is also specifying an image, then the value for BUNDLE_IMAGE_CONTAINER_IMAGE has precedence. |
IMAGE_PROCESSING_CONTAINER_TEMPLATE | JSON representation of a Container template that is used for steps that processes the image. Default is {"image": "ghcr.io/shipwright-io/build/image-processing:latest", "command": ["/ko-app/image-processing"], "env": [{"name": "HOME","value": "/shared-home"}], "securityContext": {"allowPrivilegeEscalation": false, "capabilities": {"add": ["DAC_OVERRIDE"], "drop": ["ALL"]}, "runAsUser": 0, "runAsgGroup": 0}} . The following properties are ignored as they are set by the controller: args , name . |
IMAGE_PROCESSING_CONTAINER_IMAGE | Custom container image that is used for steps that processes the image. If IMAGE_PROCESSING_CONTAINER_TEMPLATE is also specifying an image, then the value for IMAGE_PROCESSING_CONTAINER_IMAGE has precedence. |
WAITER_IMAGE_CONTAINER_TEMPLATE | JSON representation of a Container template that waits for local source code to be uploaded to it. Default is {"image":"ghcr.io/shipwright-io/build/waiter:latest", "command": ["/ko-app/waiter"], "args": ["start"], "env": [{"name": "HOME","value": "/shared-home"}], "securityContext":{"allowPrivilegeEscalation": false, "capabilities": {"drop": ["ALL"]}, "runAsUser":1000,"runAsGroup":1000}} . The following properties are ignored as they are set by the controller: args , name . |
WAITER_IMAGE_CONTAINER_IMAGE | Custom container image that waits for local source code to be uploaded to it. If WAITER_IMAGE_CONTAINER_TEMPLATE is also specifying an image, then the value for WAITER_IMAGE_CONTAINER_IMAGE has precedence. |
BUILD_CONTROLLER_LEADER_ELECTION_NAMESPACE | Set the namespace to be used to store the shipwright-build-controller lock, by default it is in the same namespace as the controller itself. |
BUILD_CONTROLLER_LEASE_DURATION | Override the LeaseDuration , which is the duration that non-leader candidates will wait to force acquire leadership. |
BUILD_CONTROLLER_RENEW_DEADLINE | Override the RenewDeadline , which is the duration that the acting leader will retry refreshing leadership before giving up. |
BUILD_CONTROLLER_RETRY_PERIOD | Override the RetryPeriod , which is the duration the LeaderElector clients should wait between tries of actions. |
BUILD_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the build controller. A value of 0 or lower will use the default from the controller-runtime controller Options. Default is 0. |
BUILDRUN_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the BuildRun controller. A value of 0 or lower will use the default from the controller-runtime controller Options. Default is 0. |
BUILDSTRATEGY_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the BuildStrategy controller. A value of 0 or lower will use the default from the controller-runtime controller Options. Default is 0. |
CLUSTERBUILDSTRATEGY_MAX_CONCURRENT_RECONCILES | The number of concurrent reconciles by the ClusterBuildStrategy controller. A value of 0 or lower will use the default from the controller-runtime controller Options. Default is 0. |
KUBE_API_BURST | Burst to use for the Kubernetes API client. See Config.Burst. A value of 0 or lower will use the default from client-go, which currently is 10. Default is 0. |
KUBE_API_QPS | QPS to use for the Kubernetes API client. See Config.QPS. A value of 0 or lower will use the default from client-go, which currently is 5. Default is 0. |
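As a sketch, any of the variables above can be set by adding entries to the env section of the controller Deployment. The namespace, deployment name, and values below are illustrative assumptions, not recommendations:

```yaml
# Sketch: overriding controller tuning via environment variables.
# The variable names come from the table above; the namespace,
# deployment name, and values are placeholders for your installation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shipwright-build-controller
  namespace: shipwright-build
spec:
  template:
    spec:
      containers:
        - name: shipwright-build-controller
          env:
            - name: KUBE_API_QPS
              value: "50"           # values must be quoted strings
            - name: KUBE_API_BURST
              value: "100"
            - name: BUILDRUN_MAX_CONCURRENT_RECONCILES
              value: "4"
```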
Role-based Access Control
The release deployment YAML file includes two cluster-wide roles for using Shipwright Build objects.
The following roles are installed:
- shipwright-build-aggregate-view: this role grants read access (get, list, watch) to most Shipwright Build objects. This includes BuildStrategy, ClusterBuildStrategy, Build, and BuildRun objects. This role is aggregated to the Kubernetes “view” role.
- shipwright-build-aggregate-edit: this role grants write access (create, update, patch, delete) to Shipwright objects that are namespace-scoped. This includes BuildStrategy, Build, and BuildRun objects. Read access is granted to all ClusterBuildStrategy objects. This role is aggregated to the Kubernetes “edit” and “admin” roles.
Only cluster administrators are granted write access to ClusterBuildStrategy
objects.
This can be changed by creating a separate Kubernetes ClusterRole
with these permissions and binding the role to appropriate users.
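As a sketch of that approach, a ClusterRole granting write access to ClusterBuildStrategy objects could be bound to a chosen user. The role name, binding name, and user below are hypothetical, and the apiGroup is assumed to match the build.dev/v1alpha1 API version used elsewhere in this documentation:

```yaml
# Sketch: granting ClusterBuildStrategy write access to a single user.
# All names and the subject are placeholders; adjust the apiGroup to
# match the API version of your Shipwright installation.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterbuildstrategy-editor
rules:
  - apiGroups: ["build.dev"]
    resources: ["clusterbuildstrategies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: clusterbuildstrategy-editor-binding
subjects:
  - kind: User
    name: jane@example.com   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: clusterbuildstrategy-editor
  apiGroup: rbac.authorization.k8s.io
```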
2.6 - Build Controller Metrics
The Build component exposes several metrics to help you monitor the health and behavior of your build resources.
The following build metrics are exposed on port 8383.
Name | Type | Description | Labels | Status |
---|---|---|---|---|
build_builds_registered_total | Counter | Number of total registered Builds. | buildstrategy=<build_buildstrategy_name>¹, namespace=<buildrun_namespace>¹, build=<build_name>¹ | experimental |
build_buildruns_completed_total | Counter | Number of total completed BuildRuns. | buildstrategy=<build_buildstrategy_name>¹, namespace=<buildrun_namespace>¹, build=<build_name>¹, buildrun=<buildrun_name>¹ | experimental |
build_buildrun_establish_duration_seconds | Histogram | BuildRun establish duration in seconds. | buildstrategy=<build_buildstrategy_name>¹, namespace=<buildrun_namespace>¹, build=<build_name>¹, buildrun=<buildrun_name>¹ | experimental |
build_buildrun_completion_duration_seconds | Histogram | BuildRun completion duration in seconds. | buildstrategy=<build_buildstrategy_name>¹, namespace=<buildrun_namespace>¹, build=<build_name>¹, buildrun=<buildrun_name>¹ | experimental |
build_buildrun_rampup_duration_seconds | Histogram | BuildRun ramp-up duration in seconds. | buildstrategy=<build_buildstrategy_name>¹, namespace=<buildrun_namespace>¹, build=<build_name>¹, buildrun=<buildrun_name>¹ | experimental |
build_buildrun_taskrun_rampup_duration_seconds | Histogram | BuildRun taskrun ramp-up duration in seconds. | buildstrategy=<build_buildstrategy_name>¹, namespace=<buildrun_namespace>¹, build=<build_name>¹, buildrun=<buildrun_name>¹ | experimental |
build_buildrun_taskrun_pod_rampup_duration_seconds | Histogram | BuildRun taskrun pod ramp-up duration in seconds. | buildstrategy=<build_buildstrategy_name>¹, namespace=<buildrun_namespace>¹, build=<build_name>¹, buildrun=<buildrun_name>¹ | experimental |
¹ Labels for the metric are disabled by default. See Configuration of metric labels to enable them.
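If you scrape these metrics with Prometheus, a minimal static scrape configuration could look like the following sketch. Only port 8383 comes from the text above; the job name and target address are assumptions that depend on how the controller is exposed in your cluster:

```yaml
# Sketch: scraping the build controller metrics endpoint.
# The job name and target address are placeholders; replace the
# target with the address of your controller pod or service.
scrape_configs:
  - job_name: shipwright-build-controller
    static_configs:
      - targets: ["shipwright-build-controller.shipwright-build.svc:8383"]
```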
Configuration of histogram buckets
Environment variables can be set to use custom buckets for the histogram metrics:
Metric | Environment variable | Default |
---|---|---|
build_buildrun_establish_duration_seconds | PROMETHEUS_BR_EST_DUR_BUCKETS | 0,1,2,3,5,7,10,15,20,30 |
build_buildrun_completion_duration_seconds | PROMETHEUS_BR_COMP_DUR_BUCKETS | 50,100,150,200,250,300,350,400,450,500 |
build_buildrun_rampup_duration_seconds | PROMETHEUS_BR_RAMPUP_DUR_BUCKETS | 0,1,2,3,4,5,6,7,8,9,10 |
build_buildrun_taskrun_rampup_duration_seconds | PROMETHEUS_BR_RAMPUP_DUR_BUCKETS | 0,1,2,3,4,5,6,7,8,9,10 |
build_buildrun_taskrun_pod_rampup_duration_seconds | PROMETHEUS_BR_RAMPUP_DUR_BUCKETS | 0,1,2,3,4,5,6,7,8,9,10 |
The value must be a comma-separated list of numbers. You need to set the environment variable on the build controller for your customization to become active. When running locally, set the variable right before starting the controller:
export PROMETHEUS_BR_COMP_DUR_BUCKETS=30,60,90,120,180,240,300,360,420,480
make local
When you deploy the build controller in a Kubernetes cluster, you need to extend the spec.template.spec.containers[0].env
section of the sample deployment file, controller.yaml. Add an additional entry:
[...]
env:
- name: PROMETHEUS_BR_COMP_DUR_BUCKETS
value: "30,60,90,120,180,240,300,360,420,480"
[...]
Configuration of metric labels
Because the number of buckets and labels directly impacts the number of Prometheus time series, you can selectively enable only the labels you are interested in using the PROMETHEUS_ENABLED_LABELS
environment variable. The supported labels are:
- buildstrategy
- namespace
- build
- buildrun
Use a comma-separated value to enable multiple labels. For example:
export PROMETHEUS_ENABLED_LABELS=namespace
make local
or
export PROMETHEUS_ENABLED_LABELS=buildstrategy,namespace,build
make local
When you deploy the build controller in a Kubernetes cluster, you need to extend the spec.template.spec.containers[0].env
section of the sample deployment file, controller.yaml. Add an additional entry:
[...]
env:
- name: PROMETHEUS_ENABLED_LABELS
value: namespace
[...]
2.7 - Build Controller Profiling
The build controller supports a pprof profiling mode, which is omitted from the binary by default. To use profiling, use a controller image that was built with pprof enabled.
Enable pprof in the build controller
In the Kubernetes cluster, edit the shipwright-build-controller
deployment to use the container tag with the debug
suffix.
kubectl --namespace <namespace> set image \
deployment/shipwright-build-controller \
shipwright-build-controller="$(kubectl --namespace <namespace> get deployment shipwright-build-controller --output jsonpath='{.spec.template.spec.containers[].image}')-debug"
Connect go pprof to the build controller
Depending on your setup, there could be multiple build controller pods for high-availability reasons. In this case, you have to look up the current leader first. The following command can be used to verify the currently active leader:
kubectl --namespace <namespace> get configmap shipwright-build-controller-lock --output json \
| jq --raw-output '.metadata.annotations["control-plane.alpha.kubernetes.io/leader"]' \
| jq --raw-output .holderIdentity
The pprof endpoint is not exposed in the cluster and can only be used from inside the container. Therefore, set up port forwarding to make the pprof port available locally.
kubectl --namespace <namespace> port-forward <controller-pod-name> 8383:8383
Now, you can set up a local web server to browse the profiling data.
go tool pprof -http localhost:8080 http://localhost:8383/debug/pprof/heap
Please note: for this to work, you must have graphviz installed on your system, for example via brew install graphviz, apt-get install graphviz, yum install graphviz, or similar.
3 -
Contributing Guidelines
Welcome to Shipwright, we are glad you want to contribute to the project!
This document contains general guidelines for submitting contributions.
Each component of Shipwright will have its own specific guidelines.
Contributing prerequisites (CLA/DCO)
The project enforces Developer Certificate of Origin (DCO).
By submitting pull requests, submitters acknowledge that they grant the
Apache License v2 to the code and that they are eligible to grant this license for all commits submitted in their pull requests.
Getting Started
All contributors must abide by our Code of Conduct.
The core code for Shipwright is located in the following repositories:
- build - the Build APIs and associated controller to run builds.
- cli - the shp command line for Shipwright builds.
- operator - an operator to install Shipwright components on Kubernetes via OLM.
Technical documentation is spread across the code repositories, and is consolidated in the website repository.
Content in website is published to shipwright.io.
Creating new Issues
We recommend opening an issue for the following scenarios:
- Asking for help or questions. (Use the discussion or help_wanted label)
- Reporting a bug. (Use the kind/bug label)
- Requesting a new feature. (Use the kind/feature label)
Use the following checklist to determine where you should create an issue:
- If the issue is related to how a Build or BuildRun behaves, or related to Build strategies, create an issue in build.
- If the issue is related to the command line, create an issue in cli.
- If the issue is related to how the operator installs Shipwright on a cluster, create an issue in operator.
- If the issue is related to the shipwright.io website, create an issue in website.
If you are not sure, create an issue in this repository, and the Shipwright maintainers will route it to the correct location.
If a feature request is sufficiently broad or significant, the community may ask you to submit a SHIP enhancement proposal.
Please refer to the SHIP guidelines to learn how to submit a SHIP proposal.
Writing Pull Requests
Contributions can be submitted by creating a pull request on GitHub.
We recommend you do the following to ensure the maintainers can collaborate on your contribution:
- Fork the project into your personal GitHub account
- Create a new feature branch for your contribution
- Make your changes
- If you make code changes, ensure tests are passing
- Open a PR with a clear description, completing the pull request template if one is provided
Please reference the appropriate GitHub issue if your pull request provides a fix.
NOTE: All commits must be signed off (Developer Certificate of Origin, DCO), so make sure you use the -s
flag when you commit. See more information on signing commits here.
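For example, the sign-off trailer can be added with the -s flag (the commit message below is illustrative):

```shell
# -s appends a "Signed-off-by: Your Name <you@example.com>" trailer,
# taken from your git user.name and user.email configuration.
git commit -s -m "docs: fix typo in build strategy section"
```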
Code review process
Once your pull request is submitted, a Shipwright maintainer should be assigned to review your changes.
The code review should cover:
- Ensure all related tests (unit, integration, and e2e) are passing.
- Ensure the code style is compliant with the coding conventions.
- Ensure the code is properly documented, e.g. with enough comments where needed.
- Ensure the code adds the necessary test cases (unit, integration, or e2e) if needed.
Contributors are expected to respond to feedback from reviewers in a constructive manner.
Reviewers are expected to respond to new submissions in a timely fashion, with clear language if changes are requested.
Once the pull request is approved and marked “lgtm”, it will get merged.
Community Meetings
We run community meetings every Monday at 13:00 UTC.
For each upcoming meeting we create a new issue where we lay out the topics to discuss.
See the outcomes of our previous meetings.
Please request an invite in our Slack channel or join the shipwright-dev mailing list.
All meetings are also published on our public calendar.