I am currently using Garden to keep a staging environment up to date in our CI (GitHub Actions).
Consecutive runs are "unaware" of each other, and Helm deployments fail because the k8s resources already exist.
As a dirty fix, I added a step that removes the entire staging namespace before each run.
I am aware that committing the .garden folder to the repository would solve this, as the context from previous runs would be there.
But we use the same Garden project for individual dev environments, so having a .garden folder committed to the repo would cause problems.
Is there a clean way to solve this problem?
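For context, the "dirty fix" cleanup step looks roughly like this. A minimal sketch, assuming the staging namespace is literally named `staging` (the step name and namespace are my placeholders, not from our actual workflow):

```yaml
# Hypothetical GitHub Actions step; wipes the namespace so Helm can
# recreate resources from scratch on the next deploy.
- name: Reset staging namespace (dirty fix)
  run: kubectl delete namespace staging --ignore-not-found=true --wait=true
```
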
12/21/2022, 12:48 PM
This feels like an anti-pattern to me, but I'm curious what @glamorous-caravan-26619 would have to say on the subject?
12/29/2022, 11:23 AM
@quaint-dress-831 @glamorous-caravan-26619 Thanks for answering. I'm curious to know which part feels like an anti-pattern. The fact that the separate runs would share the same Garden runtime files? Is Garden maybe not suited for maintaining and updating a static, non-ephemeral environment (like the mentioned staging environment) from the context of a CI/CD system?
I tried deleting the .garden folder and running a deployment again locally, and it somehow picks up the existing deployments and does not throw the Helm/Kubernetes error :(. I seem to have misunderstood a lot of things.
12/29/2022, 1:38 PM
Let me know if we should all get on a call and help you grok!! That's what we're here for
12/30/2022, 11:14 AM
It turns out the problem was with a Terraform module I had that deploys Kubernetes secrets. I mistook the logs and thought they were Garden logs. So I just had to upload the state for the module to a Cloud Storage bucket. It turns out the .garden folder does not hold any state that is relevant across consecutive runs. Sorry for the big confusion; I am clearly still learning.
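For anyone finding this later: the fix was just pointing the module at a remote state backend so consecutive CI runs share Terraform state. A minimal sketch using Terraform's `gcs` backend, assuming a pre-existing bucket named `my-tf-state` (both names here are hypothetical):

```terraform
terraform {
  backend "gcs" {
    # Hypothetical bucket; enabling object versioning on it is a good idea
    bucket = "my-tf-state"
    prefix = "staging/secrets-module"
  }
}
```

After adding this, `terraform init` migrates the local state to the bucket, and each CI run picks up where the last one left off.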