Different behavior in CI pipeline and local run
# 🌱|help-and-getting-started
Hi, I'm trying to deploy a Helm chart to our Kubernetes cluster from a Jenkins pipeline in order to run some integration tests. I'm using an existing chart that I have tested thoroughly while running Garden from my machine. For the Jenkins run, though, I had to create a new cluster admin role with specific permissions (see snippet attached) to give Jenkins kubectl access; I don't need it when running locally because my user has full cluster access. Now I'm getting an error that has me scratching my head:
ℹ deploy.gdp-garden [verbose] → [kubernetes-plugin] Status of Deployment dp-ci-1691586975313-gdp is "deploying"

⚠ deploy.gdp-garden → Deployment/dp-ci-1691586975313-gdp: Error: failed to generate container "5619a71419b472824c14778e2632e2ed39f0571748f98e54023cf05c87e231cf" spec: failed to generate spec: no command specified

ℹ deploy.gdp-garden [verbose] → [kubernetes-plugin] Deployment/dp-ci-1691586975313-gdp: Error: failed to generate container "5619a71419b472824c14778e2632e2ed39f0571748f98e54023cf05c87e231cf" spec: failed to generate spec: no command specified
A CMD is present in the Dockerfile used to build the container for this deployment, so our Helm chart doesn't specify a command for the Deployment, and this has always worked in the past (including when I deployed the chart locally with Garden). I could add a command to the manifest or the Helm values to get past this error, but I'm wondering why the behavior differs. Am I missing some permission in the ClusterRole?
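For illustration, the workaround I have in mind would look roughly like this; the value and entrypoint names below are placeholders, not our actual chart:

```yaml
# values.yaml (hypothetical): normally we leave this unset so the
# image's CMD is used.
command:
  - /app/entrypoint.sh
```

```yaml
# templates/deployment.yaml (hypothetical excerpt): only render the
# command when a value is provided, otherwise fall back to the image CMD.
spec:
  containers:
    - name: gdp
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      {{- with .Values.command }}
      command:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```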

https://cdn.discordapp.com/attachments/1138973829117526106/1138973829255933962/clusterrole.png
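For reference, the kind of ClusterRole and binding I mean looks roughly like the sketch below; this is a minimal hypothetical version, not the exact rules from the screenshot above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-garden-deployer   # placeholder name
rules:
  - apiGroups: ["", "apps", "batch"]
    resources:
      - pods
      - pods/log
      - services
      - configmaps
      - secrets
      - namespaces
      - deployments
      - jobs
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-garden-deployer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-garden-deployer
subjects:
  - kind: ServiceAccount
    name: jenkins       # placeholder: the service account the pipeline runs as
    namespace: jenkins
```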

Update: my team and I noticed that it could be related to the Docker version that Garden runs. I think this is specifically a problem with Docker 24.0.x. Is there any way to pass a path to a specific Docker client that is already present on the system, the same way kubectlPath works?
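To make the question concrete, here's roughly what I mean; kubectlPath is the existing setting I'm referring to, while the dockerPath key is purely hypothetical (I don't know whether anything like it exists):

```yaml
# Provider config excerpt (context and paths are placeholders).
providers:
  - name: kubernetes
    context: jenkins-ci
    kubectlPath: /usr/local/bin/kubectl
    # dockerPath: /usr/local/bin/docker   # hypothetical: the kind of option I'm asking about
```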
Hi @flaky-plumber-11624 — I'm not quite sure what the underlying cause is here (I'd probably have to take a closer look at the chart manifests and Dockerfile to get a better idea). But as to the Docker version, using in-cluster building via the `cluster-buildkit` or `kaniko` build modes might be useful to narrow this down (from your description, I got the impression you're doing local Docker builds both in CI and in dev). That way you'd get the same build behaviour in dev and CI, since both would do in-cluster builds in the remote cluster. If there's still a problem with the deployment at that point, that should help narrow this down a bit.
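Concretely, switching build modes is a setting on the kubernetes provider; the context name below is a placeholder:

```yaml
providers:
  - name: kubernetes
    context: my-ci-context   # placeholder
    # Switch from local Docker builds to in-cluster builds:
    buildMode: kaniko        # or cluster-buildkit
```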
In the meantime, could you maybe share any relevant (and non-sensitive) parts of the Dockerfile and Helm manifests in question? Maybe we'll notice something that could be helpful here.