There are two types of strategies: the ClusterBuildStrategy (clusterbuildstrategies.shipwright.io/v1beta1) and the BuildStrategy (buildstrategies.shipwright.io/v1beta1). Both define a shared group of steps needed to fulfill the application build. A ClusterBuildStrategy is available cluster-wide, while a BuildStrategy is available within a namespace.
Well-known strategies can be bootstrapped from here. The currently supported ClusterBuildStrategies are:
Name | Supported platforms |
---|---|
buildah | all |
BuildKit | all |
buildpacks-v3-heroku | linux/amd64 only |
buildpacks-v3 | linux/amd64 only |
kaniko | all |
ko | all |
source-to-image | linux/amd64 only |
The currently supported namespaced BuildStrategies are:
Name | Supported platforms |
---|---|
buildpacks-v3-heroku | linux/amd64 only |
buildpacks-v3 | linux/amd64 only |
The buildah ClusterBuildStrategy uses buildah to build and push a container image out of a Dockerfile. The Dockerfile should be specified on the Build resource.
The strategy is available in two formats: one using a shipwright-managed push and one using a strategy-managed push. Learn more about the differences of shipwright-, or strategy-managed push.
To install them, use:
kubectl apply -f samples/v1beta1/buildstrategy/buildah/buildstrategy_buildah_shipwright_managed_push_cr.yaml
kubectl apply -f samples/v1beta1/buildstrategy/buildah/buildstrategy_buildah_strategy_managed_push_cr.yaml
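For illustration, a minimal Build referencing the buildah strategy and setting the Dockerfile parameter might look like this (a sketch; the output image is a placeholder and the strategy name must match the installed ClusterBuildStrategy):
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-build
spec:
  source:
    git:
      url: https://github.com/shipwright-io/sample-go
    contextDir: docker-build
  strategy:
    name: buildah            # must match the name of the installed ClusterBuildStrategy
    kind: ClusterBuildStrategy
  paramValues:
    - name: dockerfile
      value: Dockerfile
  output:
    image: registry.example.com/my-org/sample-go:latest   # placeholder output image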
The buildpacks-v3 BuildStrategy/ClusterBuildStrategy uses a Cloud Native Builder (CNB) container image, and is able to implement lifecycle commands.
You can install the BuildStrategy
in your namespace or install the ClusterBuildStrategy
at cluster scope so that it can be shared across namespaces.
To install the cluster scope strategy, you can choose between the Paketo and Heroku buildpacks families:
# Paketo
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_cr.yaml
# Heroku
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_cr.yaml
To install the namespaced scope strategy, you can choose between the Paketo and Heroku buildpacks families:
# Paketo
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3_namespaced_cr.yaml
# Heroku
kubectl apply -f samples/v1beta1/buildstrategy/buildpacks-v3/buildstrategy_buildpacks-v3-heroku_namespaced_cr.yaml
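For example, a Build referencing the namespaced Paketo strategy could look like this (a sketch; the source repository and output image are placeholders):
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildpacks-v3-build
spec:
  source:
    git:
      url: https://github.com/shipwright-io/sample-go   # placeholder source repository
    contextDir: source-build
  strategy:
    name: buildpacks-v3
    kind: BuildStrategy        # use ClusterBuildStrategy for the cluster-scoped variant
  output:
    image: registry.example.com/my-org/sample-app:latest   # placeholder output image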
The kaniko ClusterBuildStrategy uses Kaniko's executor to build a container image out of a Dockerfile and a context directory. The kaniko-trivy ClusterBuildStrategy adds Trivy scanning and refuses to push images with critical vulnerabilities.
To install the cluster scope strategy, use:
kubectl apply -f samples/v1beta1/buildstrategy/kaniko/buildstrategy_kaniko_cr.yaml
You can also incorporate scanning into the ClusterBuildStrategy. The kaniko-trivy
ClusterBuildStrategy builds the image with kaniko
, then scans with trivy. The BuildRun will then exit with an error if there is a critical vulnerability, instead of pushing the vulnerable image into the container registry.
To install the cluster scope strategy, use:
kubectl apply -f samples/v1beta1/buildstrategy/kaniko/buildstrategy_kaniko-trivy_cr.yaml
Note: doing image scanning is not a substitute for trusting the Dockerfile you are building. The build process itself is also susceptible if the Dockerfile has a vulnerability. Frameworks/strategies such as build-packs or source-to-image (which avoid directly building a Dockerfile) should be considered if you need guardrails around the code you want to build.
BuildKit is composed of the buildctl client and the buildkitd daemon. The buildkit ClusterBuildStrategy runs in daemonless mode, where both the client and an ephemeral daemon run in a single container. In addition, it runs without privileges (rootless).
By default, the buildkit
ClusterBuildStrategy will use caching to optimize the build times. When pushing an image to a registry, it will use the inline export cache, which appends cache information to the image that is built. Please refer to export-cache docs for more information. Caching can be disabled by setting the cache
parameter to "disabled"
. See Defining ParamValues for more information.
The sample build strategy contains array parameters to set values for ARG
s in your Dockerfile, and for mounts with type=secret. The parameter names are build-args
and secrets
. Defining ParamValues contains example usage.
The sample build strategy contains a platforms
array parameter that you can set to leverage BuildKit’s support to build multi-platform images. If you do not set this value, the image is built for the platform that is supported by the FROM
image. If that image supports multiple platforms, then the image will be built for the platform of your Kubernetes node.
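A sketch of how these parameters could be set on a Build that uses the buildkit strategy (the values are illustrative assumptions):
spec:
  strategy:
    name: buildkit
    kind: ClusterBuildStrategy
  paramValues:
    - name: cache
      value: disabled               # turn off the inline export cache
    - name: build-args
      values:
        - value: NODE_VERSION=18    # value for an ARG in the Dockerfile
    - name: platforms
      values:
        - value: linux/amd64
        - value: linux/arm64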
The buildkit ClusterBuildStrategy currently locks the following parameters:
- AppArmor and seccomp are disabled using the unconfined profile.
The BuildKit strategy contains fields with regards to security settings. It therefore depends on the respective cluster setup and administrative configuration. These settings are:
- The unconfined profile for both AppArmor and seccomp, as required by the underlying rootlesskit.
- The allowPrivilegeEscalation setting is set to true to be able to use binaries that have the setuid bit set in order to run with "root" level privileges. In case of BuildKit, this is required by rootlesskit in order to set the user namespace mapping file /proc/<pid>/uid_map.
- A non-root user is configured via runAsUser to run the build.
These settings have no effect in case Pod Security Standards are not used.
Please note: At this point in time, there is no way to run rootlesskit
to start the BuildKit daemon without the allowPrivilegeEscalation
flag set to true
. Clusters with the Restricted
security standard in place will not be able to use this build strategy.
To install the cluster scope strategy, use:
kubectl apply -f samples/v1beta1/buildstrategy/buildkit/buildstrategy_buildkit_cr.yaml
The ko ClusterBuildStrategy uses ko's publish command to build an image from a Golang main package.
To install the cluster scope strategy, use:
kubectl apply -f samples/v1beta1/buildstrategy/ko/buildstrategy_ko_cr.yaml
The build strategy provides the following parameters that you can set in a Build or BuildRun to control its behavior:
Parameter | Description | Default |
---|---|---|
go-flags | Value for the GOFLAGS environment variable. | Empty |
go-version | Version of Go, must match a tag from the golang image | 1.21 |
ko-version | Version of ko, must be either latest for the newest release, or a ko release name | latest |
package-directory | The directory inside the context directory containing the main package. | . |
target-platform | Target platform to be built. For example: linux/arm64 . Multiple platforms can be provided separated by comma, for example: linux/arm64,linux/amd64 . The value all will build all platforms supported by the base image. The value current will build the platform on which the build runs. | current |
Volume | Description |
---|---|
gocache | Volume to contain the GOCACHE. Can be set to a persistent volume to optimize compilation performance for rebuilds. The default is an emptyDir volume which means that the cached data is discarded at the end of a BuildRun. |
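A sketch of a Build that sets some of these parameters and, assuming the volume is declared overridable, replaces the gocache volume with a persistent volume claim (the PVC name and package directory are placeholders):
spec:
  strategy:
    name: ko
    kind: ClusterBuildStrategy
  paramValues:
    - name: go-version
      value: "1.21"
    - name: package-directory
      value: ./cmd/server            # placeholder path to the main package
  volumes:
    - name: gocache
      persistentVolumeClaim:
        claimName: gocache-pvc       # placeholder PVC that keeps the Go build cache between runs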
This BuildStrategy is composed of source-to-image and kaniko in order to generate a Dockerfile and prepare the application to be built later on with a builder. s2i requires a specially crafted image, which can be provided via the builderImage parameter on the Build resource.
To install the cluster scope strategy use:
kubectl apply -f samples/v1beta1/buildstrategy/source-to-image/buildstrategy_source-to-image_cr.yaml
This strategy uses the following tools:
- s2i, in order to generate a Dockerfile and prepare the source code for the image build;
- kaniko, to create and push the container image to what is defined as output.image.
Strategy parameters allow users to parameterize the strategy definition by controlling parameter values via the Build or BuildRun resources.
Users defining parameters in their strategies need to understand the following:
Definition: A list of parameters should be defined under spec.parameters. Each list item should consist of a name, a description, a type (either "array" or "string"), and optionally a default value (for type=string) or default values (for type=array). If no default(s) are provided, then the user must define a value in the Build or BuildRun.
Usage: In order to use a parameter in the strategy steps, use the following syntax for type=string: $(params.your-parameter-name). String parameters can be used in all places in the buildSteps. Some example scenarios are:
- image: to use a custom tag, for example golang:$(params.go-version) as it is done in the ko sample build strategy
- args: to pass data into your builder command
- env: to force a user to provide a value for an environment variable
Arrays are referenced using $(params.your-array-parameter-name[*]), and can only be used as the value for args or command because these are defined as arrays by Kubernetes. For every item in the array, an arg will be set. For example, if you specify this in your build strategy step:
spec:
parameters:
- name: tool-args
description: Parameters for the tool
type: array
steps:
- name: a-step
command:
- some-tool
args:
- $(params.tool-args[*])
If the build user sets the value of tool-args to ["--some-arg", "some-value"], then the Pod will contain these args:
spec:
containers:
- name: a-step
args:
...
- --some-arg
- some-value
Parameterize: Any Build or BuildRun referencing your strategy can set a value for the your-parameter-name parameter if needed.
Note: Users can provide parameter values as simple strings or as references to keys in ConfigMaps and Secrets. If they use a ConfigMap or Secret, then the value can only be used if the parameter is used in the command
, args
, or env
section of the buildSteps
. For example, the above mentioned scenario to set a step’s image
to golang:$(params.go-version)
does not allow the usage of ConfigMaps or Secrets.
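As a sketch, a Build could provide values for the tool-args array parameter from the example above, mixing plain values with a ConfigMap reference (the ConfigMap name and key are hypothetical):
spec:
  paramValues:
    - name: tool-args
      values:
        - value: --verbose
        - configMapValue:
            name: tool-settings      # hypothetical ConfigMap
            key: extra-arg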
The following example is from the BuildKit sample build strategy. It defines and uses several parameters:
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
name: buildkit
...
spec:
parameters:
- name: build-args
description: "The values for the ARGs in the Dockerfile. Values must be in the format KEY=VALUE."
type: array
defaults: []
- name: cache
description: "Configure BuildKit's cache usage. Allowed values are 'disabled' and 'registry'. The default is 'registry'."
type: string
default: registry
- name: insecure-registry
type: string
description: "enables the push to an insecure registry"
default: "false"
- name: secrets
description: "The secrets to pass to the build. Values must be in the format ID=FILE_CONTENT."
type: array
defaults: []
- name: dockerfile
description: The path to the Dockerfile to be used for building the image.
type: string
default: "Dockerfile"
steps:
...
- name: build-and-push
image: moby/buildkit:nightly-rootless
imagePullPolicy: Always
workingDir: $(params.shp-source-root)
...
command:
- /bin/ash
args:
- -c
- |
set -euo pipefail
# Prepare the file arguments
DOCKERFILE_PATH='$(params.shp-source-context)/$(params.dockerfile)'
DOCKERFILE_DIR="$(dirname "${DOCKERFILE_PATH}")"
DOCKERFILE_NAME="$(basename "${DOCKERFILE_PATH}")"
# We only have ash here and therefore no bash arrays to help add dynamic arguments (the build-args) to the build command.
echo "#!/bin/ash" > /tmp/run.sh
echo "set -euo pipefail" >> /tmp/run.sh
echo "buildctl-daemonless.sh \\" >> /tmp/run.sh
echo "build \\" >> /tmp/run.sh
echo "--progress=plain \\" >> /tmp/run.sh
echo "--frontend=dockerfile.v0 \\" >> /tmp/run.sh
echo "--opt=filename=\"${DOCKERFILE_NAME}\" \\" >> /tmp/run.sh
echo "--local=context='$(params.shp-source-context)' \\" >> /tmp/run.sh
echo "--local=dockerfile=\"${DOCKERFILE_DIR}\" \\" >> /tmp/run.sh
echo "--output=type=image,name='$(params.shp-output-image)',push=true,registry.insecure=$(params.insecure-registry) \\" >> /tmp/run.sh
if [ "$(params.cache)" == "registry" ]; then
echo "--export-cache=type=inline \\" >> /tmp/run.sh
echo "--import-cache=type=registry,ref='$(params.shp-output-image)' \\" >> /tmp/run.sh
elif [ "$(params.cache)" == "disabled" ]; then
echo "--no-cache \\" >> /tmp/run.sh
else
echo -e "An invalid value for the parameter 'cache' has been provided: '$(params.cache)'. Allowed values are 'disabled' and 'registry'."
echo -n "InvalidParameterValue" > '$(results.shp-error-reason.path)'
echo -n "An invalid value for the parameter 'cache' has been provided: '$(params.cache)'. Allowed values are 'disabled' and 'registry'." > '$(results.shp-error-message.path)'
exit 1
fi
stage=""
for a in "$@"
do
if [ "${a}" == "--build-args" ]; then
stage=build-args
elif [ "${a}" == "--secrets" ]; then
stage=secrets
elif [ "${stage}" == "build-args" ]; then
echo "--opt=\"build-arg:${a}\" \\" >> /tmp/run.sh
elif [ "${stage}" == "secrets" ]; then
# Split ID=FILE_CONTENT into variables id and data
# using head because the data could be multiline
id="$(echo "${a}" | head -1 | sed 's/=.*//')"
# This is hacky, we remove the suffix ${id}= from all lines of the data.
# If the data would be multiple lines and a line would start with ${id}=
# then we would remove it. We could force users to give us the secret
# base64 encoded. But ultimately, the best solution might be if the user
# mounts the secret and just gives us the path here.
data="$(echo "${a}" | sed "s/^${id}=//")"
# Write the secret data into a temporary file, once we have volume support
# in the build strategy, we should use a memory based emptyDir for this.
echo -n "${data}" > "/tmp/secret_${id}"
# Add the secret argument
echo "--secret id=${id},src="/tmp/secret_${id}" \\" >> /tmp/run.sh
fi
done
echo "--metadata-file /tmp/image-metadata.json" >> /tmp/run.sh
chmod +x /tmp/run.sh
/tmp/run.sh
# Store the image digest
sed -E 's/.*containerimage.digest":"([^"]*).*/\1/' < /tmp/image-metadata.json > '$(results.shp-image-digest.path)'
# That's the separator between the shell script and its args
- --
- --build-args
- $(params.build-args[*])
- --secrets
- $(params.secrets[*])
See more information on how to use these parameters in a Build
or BuildRun
in the related documentation.
In contrast to the strategy spec.parameters, system parameters and their values are defined at runtime. You can use them when defining the steps of a build strategy to access system information as well as information provided by the user in their Build or BuildRun. The following parameters are available:
Parameter | Description |
---|---|
$(params.shp-source-root) | The absolute path to the directory that contains the user’s sources. |
$(params.shp-source-context) | The absolute path to the context directory of the user’s sources. If the user specified no value for spec.source.contextDir in their Build , then this value will equal the value for $(params.shp-source-root) . Note that this directory is not guaranteed to exist at the time the container for your step is started, you can therefore not use this parameter as a step’s working directory. |
$(params.shp-output-directory) | The absolute path to a directory that the build strategy should store the image in. You can store a single tarball containing a single image, or an OCI image layout. |
$(params.shp-output-image) | The URL of the image that the user wants to push, as specified in the Build’s spec.output.image or as an override from the BuildRun’s spec.output.image . |
$(params.shp-output-insecure) | A flag that indicates the output image’s registry location is insecure because it uses a certificate not signed by a certificate authority, or uses HTTP. |
As a build strategy author, you decide whether your build strategy or Shipwright pushes the build image to the container registry:
- If you do not use the $(params.shp-output-directory) parameter, then Shipwright assumes that your build strategy PUSHES the image. We call this a strategy-managed push.
- If you use the $(params.shp-output-directory) parameter, then Shipwright assumes that your build strategy does NOT PUSH the image. We call this a shipwright-managed push.
When you use the $(params.shp-output-directory) parameter, then Shipwright will also set the image-related system results.
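For orientation, a shipwright-managed push step might look roughly like this (a sketch; the builder image and my-build-tool are hypothetical stand-ins for your actual tooling):
steps:
  - name: build
    image: registry.example.com/my-builder:latest   # hypothetical builder image
    workingDir: $(params.shp-source-root)
    command:
      - /bin/sh
    args:
      - -c
      - |
        set -e
        # Build the image and store it as an OCI image layout in the output
        # directory; Shipwright then performs the push (shipwright-managed push).
        my-build-tool build --context '$(params.shp-source-context)' --oci-layout '$(params.shp-output-directory)'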
If you are uncertain about how to implement your build strategy, then follow this guidance:
- If your strategy performs the push itself (strategy-managed push), it should also support pushing to insecure registries by honoring the $(params.shp-output-insecure) parameter.
- A strategy-managed push can be more efficient for tools that do not need the full image locally, such as ko. Such base image layers are often already present in the destination registry (like in rebuilds). If the strategy can perform the push operation, then it can optimize the process and can omit the download of the base image when it is not required to push it. In the case of a shipwright-managed push, the complete image must be locally stored in $(params.shp-output-directory), which implies that a base image must always be downloaded.
Parameter Type | User Configurable | Definition |
---|---|---|
System Parameter | No | At run-time, by the BuildRun controller. |
Strategy Parameter | Yes | At build-time, during the BuildStrategy creation. |
In build strategy steps, string parameters are referenced using $(params.PARAM_NAME)
. This applies to system parameters, and those parameters defined in the build strategy. You can reference those parameters at many locations in the build steps, such as environment variables values, arguments, image, and more. In the Pod, all $(params.PARAM_NAME)
tokens will be replaced by simple string replaces. This is safe in most locations but requires your attention when you define an inline script using an argument. For example:
spec:
parameters:
- name: sample-parameter
description: A sample parameter
type: string
steps:
- name: sample-step
command:
- /bin/bash
args:
- -c
- |
set -euo pipefail
some-tool --sample-argument "$(params.sample-parameter)"
This opens the door to script injection, for example if the user sets the sample-parameter
to argument-value" && malicious-command && echo "
, the resulting pod argument will look like this:
- |
set -euo pipefail
some-tool --sample-argument "argument-value" && malicious-command && echo ""
To securely pass a parameter value into a script-style argument, you can choose between these two approaches:
Using environment variables. This is used in some of our sample strategies, for example ko, or buildpacks. Basically, instead of directly using the parameter inside the script, you pass it via environment variable. Using quoting, shells ensure that no command injection is possible:
spec:
parameters:
- name: sample-parameter
description: A sample parameter
type: string
steps:
- name: sample-step
env:
- name: PARAM_SAMPLE_PARAMETER
value: $(params.sample-parameter)
command:
- /bin/bash
args:
- -c
- |
set -euo pipefail
some-tool --sample-argument "${PARAM_SAMPLE_PARAMETER}"
Using arguments. This is used in some of our sample build strategies, for example buildah. Here, you use arguments to your own inline script. Appropriate shell quoting guards against command injection.
spec:
parameters:
- name: sample-parameter
description: A sample parameter
type: string
steps:
- name: sample-step
command:
- /bin/bash
args:
- -c
- |
set -euo pipefail
SAMPLE_PARAMETER="$1"
some-tool --sample-argument "${SAMPLE_PARAMETER}"
- --
- $(params.sample-parameter)
If you are using a strategy-managed push (see output directory vs output image), you can optionally store the size and digest of the image your build strategy created in a set of files.
Result file | Description |
---|---|
$(results.shp-image-digest.path) | File to store the digest of the image. |
$(results.shp-image-size.path) | File to store the compressed size of the image. |
You can look at sample build strategies, such as Buildpacks, to see how they fill some or all of the results files.
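For a strategy-managed push, the final step of your strategy could end along these lines (a sketch; how the digest and size are obtained depends entirely on your tooling):
    args:
      - -c
      - |
        set -e
        # push the image with your tool of choice, then record its digest and
        # compressed size so that Shipwright can surface them in the BuildRun status
        printf '%s' "${IMAGE_DIGEST}" > '$(results.shp-image-digest.path)'
        printf '%s' "${IMAGE_SIZE}" > '$(results.shp-image-size.path)'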
This information will be available in the .status.output
section of the BuildRun.
apiVersion: shipwright.io/v1beta1
kind: BuildRun
# [...]
status:
# [...]
output:
digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53
size: 1989004
# [...]
Additionally, you can store error details for debugging purposes when a BuildRun fails using your strategy.
Result file | Description |
---|---|
$(results.shp-error-reason.path) | File to store the error reason. |
$(results.shp-error-message.path) | File to store the error message. |
Reason is intended to be a one-word CamelCase classification of the error source, with the first letter capitalized.
Error details are only propagated if the build container terminates with a non-zero exit code.
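A step can populate these files right before failing, similar to what the BuildKit sample above does for an invalid cache value. A minimal sketch (the check itself is made up for illustration):
    args:
      - -c
      - |
        set -e
        if [ ! -f '$(params.shp-source-context)/Dockerfile' ]; then
          # a one-word CamelCase reason plus a human-readable message, then a non-zero exit
          printf '%s' 'DockerfileNotFound' > '$(results.shp-error-reason.path)'
          printf '%s' 'No Dockerfile found in the source context directory.' > '$(results.shp-error-message.path)'
          exit 1
        fi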
This information will be available in the .status.failureDetails
section of the BuildRun.
apiVersion: shipwright.io/v1beta1
kind: BuildRun
# [...]
status:
# [...]
failureDetails:
location:
container: step-source-default
pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr
message: The source repository does not exist, or you have insufficient permission
to access it.
reason: GitRemotePrivate
In a build strategy, it is recommended that you define a securityContext
with a runAsUser and runAsGroup:
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
This runAs configuration will be used for all shipwright-managed steps such as the step that retrieves the source code, and for the steps you define in the build strategy. This configuration ensures that all steps share the same runAs configuration which eliminates file permission problems.
Without a securityContext for the build strategy, shipwright-managed steps will run with the runAsUser and runAsGroup that are defined in the configuration's container templates, which is potentially a different user than the one you use in your build strategy. This can result in issues when, for example, source code is downloaded as user A (as defined by the Git container template), but your strategy accesses it as user B.
In build strategy steps you can define a step-specific securityContext that matches the Kubernetes security context, where you can configure other security aspects such as capabilities or privileged containers.
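For example, a strategy could run most steps as user 1000, while one step that genuinely needs elevated permissions declares its own securityContext (a sketch; the image and capabilities are illustrative):
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
  steps:
    - name: build-and-push
      image: registry.example.com/my-builder:latest   # hypothetical builder image
      securityContext:
        runAsUser: 0            # this single step runs as root
        capabilities:
          add:
            - SETUID
            - SETGID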
All strategy steps can include a definition of resources (limits and requests) for CPU, memory, and disk. For strategies with more than one step, each step (container) can require more resources than others. Strategy admins are free to define the values that they consider the best fit for each step. Also, identical strategies with the same steps, differing only in their name and step resources, can be installed on the cluster to allow users to create a build with smaller or larger resource requirements.
If strategy admins need multiple flavors of the same strategy, where one strategy has more resources than the other, then multiple strategies of the same type should be defined on the cluster. In the following example, we use Kaniko as the type:
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
name: kaniko-small
spec:
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:v1.21.1
workingDir: $(params.shp-source-root)
securityContext:
runAsUser: 0
capabilities:
add:
- CHOWN
- DAC_OVERRIDE
- FOWNER
- SETGID
- SETUID
- SETFCAP
- KILL
env:
- name: DOCKER_CONFIG
value: /tekton/home/.docker
- name: AWS_ACCESS_KEY_ID
value: NOT_SET
- name: AWS_SECRET_KEY
value: NOT_SET
command:
- /kaniko/executor
args:
- --skip-tls-verify=true
- --dockerfile=$(params.dockerfile)
- --context=$(params.shp-source-context)
- --destination=$(params.shp-output-image)
- --snapshot-mode=redo
- --push-retry=3
resources:
limits:
cpu: 250m
memory: 65Mi
requests:
cpu: 250m
memory: 65Mi
parameters:
- name: dockerfile
description: The path to the Dockerfile to be used for building the image.
type: string
default: "Dockerfile"
---
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
name: kaniko-medium
spec:
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor:v1.21.1
workingDir: $(params.shp-source-root)
securityContext:
runAsUser: 0
capabilities:
add:
- CHOWN
- DAC_OVERRIDE
- FOWNER
- SETGID
- SETUID
- SETFCAP
- KILL
env:
- name: DOCKER_CONFIG
value: /tekton/home/.docker
- name: AWS_ACCESS_KEY_ID
value: NOT_SET
- name: AWS_SECRET_KEY
value: NOT_SET
command:
- /kaniko/executor
args:
- --skip-tls-verify=true
- --dockerfile=$(params.dockerfile)
- --context=$(params.shp-source-context)
- --destination=$(params.shp-output-image)
- --snapshot-mode=redo
- --push-retry=3
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
parameters:
- name: dockerfile
description: The path to the Dockerfile to be used for building the image.
type: string
default: "Dockerfile"
The above provides more control and flexibility for the strategy admins. End users only need to reference the proper strategy. For example:
---
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
name: kaniko-medium
spec:
source:
git:
url: https://github.com/shipwright-io/sample-go
contextDir: docker-build
strategy:
name: kaniko
kind: ClusterBuildStrategy
paramValues:
- name: dockerfile
value: Dockerfile
The Build controller relies on the Tekton pipeline controller to schedule the Pods that execute the above strategy steps. In a nutshell, the Build controller creates a Tekton TaskRun at runtime, and the TaskRun generates a new Pod in the particular namespace. In order to build an image, the Pod executes all the strategy steps one by one.
Tekton manages the resource requests of each step in a very particular way, see the docs, which mention the following:
The CPU, memory, and ephemeral storage resource requests will be set to zero, or, if specified, the minimums set through LimitRanges in that Namespace, if the container image does not have the largest resource request out of all container images in the Task. This ensures that the Pod that executes the Task only requests enough resources to run a single container image in the Task rather than hoard resources for all container images in the Task at once.
For a more concrete example, let's take a look at the following scenarios:
Scenario 1. Namespace without LimitRange, both steps with the same resource values.
If we apply the sample buildah Build, BuildRun, and ClusterBuildStrategy resources, we will see some differences between the TaskRun definition and the Pod definition.
For the TaskRun, as expected, we can see the resources on each step, as we previously defined in our strategy.
$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"memory": "65Mi"
}
}
$ kubectl -n test-build get tr buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"memory": "65Mi"
}
}
The Pod definition is different: Tekton only keeps the highest request values for one container and sets the others to zero:
$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"ephemeral-storage": "0",
"memory": "65Mi"
}
}
$ kubectl -n test-build get pods buildah-golang-buildrun-9gmcx-pod-lhzbc -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "0", <------------------- See how the request is set to ZERO.
"ephemeral-storage": "0", <------------------- See how the request is set to ZERO.
"memory": "0" <------------------- See how the request is set to ZERO.
}
}
In this scenario, only one container can have the spec.resources.requests definition. Even when both steps have the same values, only one container will get them; the others will be set to zero.
Scenario 2. Namespace without LimitRange, steps with different resources.
This time we use a modified buildah strategy with the following step resources:
- name: buildah-bud
image: quay.io/containers/buildah:v1.34.0
workingDir: $(params.shp-source-root)
securityContext:
privileged: true
command:
- /usr/bin/buildah
args:
- bud
- --tag=$(params.shp-output-image)
- --file=$(params.dockerfile)
- $(build.source.contextDir)
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 250m
memory: 65Mi
volumeMounts:
- name: buildah-images
mountPath: /var/lib/containers/storage
- name: buildah-push
image: quay.io/containers/buildah:v1.34.0
securityContext:
privileged: true
command:
- /usr/bin/buildah
args:
- push
- --tls-verify=false
- docker://$(params.shp-output-image)
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 250m
memory: 100Mi <------ See how we provide more memory to step-buildah-push, compared to the 65Mi of the other step
For the TaskRun
, as expected we can see the resources on each step
.
$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-bud" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"memory": "65Mi"
}
}
$ kubectl -n test-build get tr buildah-golang-buildrun-skgrp -o json | jq '.spec.taskSpec.steps[] | select(.name == "step-buildah-push" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m",
"memory": "100Mi"
}
}
The Pod definition is different: Tekton only keeps the highest request values for one container and sets the others to zero:
$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-bud" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "250m", <------------------- See how the CPU is preserved
"ephemeral-storage": "0",
"memory": "0" <------------------- See how the memory is set to ZERO
}
}
$ kubectl -n test-build get pods buildah-golang-buildrun-95xq8-pod-mww8d -o json | jq '.spec.containers[] | select(.name == "step-step-buildah-push" ) | .resources'
{
"limits": {
"cpu": "500m",
"memory": "1Gi"
},
"requests": {
"cpu": "0", <------------------- See how the CPU is set to zero.
"ephemeral-storage": "0",
"memory": "100Mi" <------------------- See how the memory is preserved on this container
}
}
In the above scenario, we can see how the maximum numbers for resource requests are distributed between containers. The container step-buildah-push gets the 100Mi for its memory request, because it defines the highest number. At the same time, the container step-buildah-bud is assigned a 0 for its memory request.
Scenario 3. Namespace with a LimitRange.
When a LimitRange exists on the namespace, the Tekton Pipeline controller follows the same approach as in the two scenarios above. The difference is that the containers with lower values get the minimum values of the LimitRange instead of zero.
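For example, with a LimitRange such as the following in the namespace (the values are illustrative), the requests that were previously set to zero would instead be raised to 100m CPU and 64Mi memory:
apiVersion: v1
kind: LimitRange
metadata:
  name: build-limits
  namespace: test-build
spec:
  limits:
    - type: Container
      min:
        cpu: 100m
        memory: 64Mi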
Annotations can be defined for a BuildStrategy/ClusterBuildStrategy as for any other Kubernetes object. Annotations are propagated to the TaskRun and from there, Tekton propagates them to the Pod. Use cases for this are, for example:
- the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations to limit the network bandwidth the Pod is allowed to use;
- the container.apparmor.security.beta.kubernetes.io/<container_name> annotation to define the AppArmor profile of a container.
The following annotations are not propagated:
- kubectl.kubernetes.io/last-applied-configuration
- clusterbuildstrategy.shipwright.io/*
- buildstrategy.shipwright.io/*
- build.shipwright.io/*
- buildrun.shipwright.io/*
A Kubernetes administrator can further restrict the usage of annotations by using policy engines like Open Policy Agent.
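As a sketch, the bandwidth-limiting use case mentioned above could be configured on a ClusterBuildStrategy like this (the values are placeholders):
apiVersion: shipwright.io/v1beta1
kind: ClusterBuildStrategy
metadata:
  name: buildah
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
spec:
  # ...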
Build Strategies can declare volumes. These volumes can be referred to by the build steps using volumeMount. Volumes in a Build Strategy follow the declaration of Pod volumes, so all the usual volumeSource types are supported.
Volumes can be overridden by Builds and BuildRuns, so Build Strategy volumes support an overridable flag, which is a boolean and is false by default. If a volume is not overridable, a Build or BuildRun that tries to override it will fail.
Build steps can declare a volumeMount
, which allows them to access volumes defined by BuildStrategy
, Build
or BuildRun
.
Here is an example of a BuildStrategy object that defines volumes and volumeMounts:
apiVersion: shipwright.io/v1beta1
kind: BuildStrategy
metadata:
name: buildah
spec:
steps:
- name: build
image: quay.io/containers/buildah:v1.27.0
workingDir: $(params.shp-source-root)
command:
- buildah
- bud
- --tls-verify=false
- --layers
- -f
- $(params.dockerfile)
- -t
- $(params.shp-output-image)
- $(params.shp-source-context)
volumeMounts:
- name: varlibcontainers
mountPath: /var/lib/containers
volumes:
- name: varlibcontainers
overridable: true
emptyDir: {}
# ...
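Because varlibcontainers is declared overridable above, a Build or BuildRun may replace it, for example with a persistent volume claim (a sketch; the claim name is a placeholder):
apiVersion: shipwright.io/v1beta1
kind: Build
metadata:
  name: buildah-build
spec:
  strategy:
    name: buildah
    kind: BuildStrategy
  volumes:
    - name: varlibcontainers
      persistentVolumeClaim:
        claimName: buildah-storage   # placeholder PVC that persists the container storage between runs
  # ...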