In-cluster building with cluster-buildkit: SSH credentials

We are testing out in-cluster building with BuildKit. I cannot seem to find any documentation on how we would provide SSH credentials to the garden-buildkit pod so that it can access private repositories during a Go or Node.js dependency pull.
We have the build loop operating, but we are hitting:
“error: invalid empty ssh agent socket, make sure SSH_AUTH_SOCK is set”

We were thinking of using kubemod to get hold of the components before deployment and apply the necessary environment variables.

Any suggestions?
Thanks!

Hello @aaron,

Sorry for the late reply. So this is a bit tricky with remote in-cluster building.
BuildKit has the option to mount a local SSH agent socket during the build:
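For reference, this is roughly how that looks in a purely local Docker/BuildKit build. It is only a minimal sketch; the Go version, the GitHub org, and the GOPRIVATE pattern are placeholders, and it assumes your private modules are reachable over SSH:

```dockerfile
# syntax=docker/dockerfile:1
FROM golang:1.22 AS build
WORKDIR /src

# Trust the git host's key so the SSH clone does not fail on host key
# verification (adjust the host for your private git server).
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts

# Rewrite HTTPS module URLs to SSH so `go mod download` goes through the agent.
RUN git config --global url."ssh://git@github.com/".insteadOf "https://github.com/"

COPY go.mod go.sum ./
# --mount=type=ssh forwards the host's SSH agent for this RUN step only.
RUN --mount=type=ssh GOPRIVATE="github.com/your-org/*" go mod download
```

built with `DOCKER_BUILDKIT=1 docker build --ssh default .`; the `--ssh default` flag is what provides the agent socket that the error message complains about.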

This is great if you build locally, but I cannot see how that would work when using a remote Kubernetes cluster for building.
Did you consider setting the SSH_AUTH_SOCK env var with kubemod? And what value did you want to pass?

Our other supported in-cluster build solution, kaniko, offers the option to pass a personal access token to the build container:

But that is only a single access token, and it is designed more for the use case where the build context itself resides in a private repository.
If your dependencies come from several private repositories, I am not sure this is currently solvable with either buildkit or kaniko.
Another solution could be to split up your Dockerfile into a multi-stage build and pass the respective access token as a buildArg to the specific stage that needs it. You can pass buildArgs with Garden (see the sketch below).

Since Docker ARGs are not preserved across stages, they should not end up in the final image. But I agree that it would be better to handle auth in the build container rather than in the Dockerfile.
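To make that a bit more concrete, here is a rough sketch. The registry URL, the token name, and the org scope are all placeholders you would adapt to your setup:

```dockerfile
# The token is only used in this stage; later stages do not inherit ARGs,
# so it should not end up in the final image's layers.
FROM node:20 AS deps
WORKDIR /app
ARG NPM_TOKEN
COPY package.json package-lock.json ./
# Write the token to .npmrc just for the install, then remove it again.
RUN echo "@your-org:registry=https://npm.pkg.github.com" > .npmrc \
    && echo "//npm.pkg.github.com/:_authToken=${NPM_TOKEN}" >> .npmrc \
    && npm ci \
    && rm -f .npmrc

FROM node:20-slim
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
CMD ["node", "index.js"]
```

And in the module's Garden config, something along these lines (assuming the token is available in the environment where you run Garden):

```yaml
kind: Module
type: container
name: my-service
buildArgs:
  # e.g. read from the local environment; adapt to however you manage secrets
  NPM_TOKEN: ${local.env.NPM_TOKEN}
```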

All that being said, I cannot really come up with an ideal solution to your problem, but I’d be happy to bounce ideas back and forth :slight_smile:
Anna