Helm charts redeploy even when unchanged
# 🌱|help-and-getting-started
m
https://github.com/garden-io/garden/issues/3473 This bug again. I was hoping that moving to 0.13 would have fixed it, but it is still happening.
q
Hi @curved-intern-91221, this has been an issue for as long as I can remember. Any chance we can get some dev eyeballs on this one?
This issue was re-opened 45 minutes ago, we'll take a look!
f
Hi @mammoth-flag-56137, are you using actions or modules?
m
actions
f
Thanks! Just tried to reproduce, but it works for me. Could you provide a reproducible example?
m
I believe the problem comes from having Garden variables in the chart values, per https://github.com/garden-io/garden/issues/3473#issuecomment-1369390015
@sparse-easter-31125 can you construct a repro of this when you get a minute?
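To be clear, by "garden variables in the chart values" I mean a templated value along these lines (names here are placeholders, not our actual config):
kind: Deploy
type: helm
name: my-service
description: Hypothetical example of templated chart values
spec:
  chart:
    path: ./my-chart
  values:
    auth:
      # any Garden template string here seems to be enough to trigger a redeploy
      password: ${var.password}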
s
Without variables
ℹ deploy.postgresql         → [silly] GET https://127.0.0.1:36447/apis/networking.k8s.io/v1/namespaces/deployment-local-bryankwok/networkpolicies/postgresql
ℹ deploy.postgresql         → [silly] GET https://127.0.0.1:36447/apis/networking.k8s.io/v1/namespaces/deployment-local-bryankwok/networkpolicies/postgresql-read
✔ deploy.postgresql         → Already deployed
ℹ deploy.postgresql         → [verbose] Deploy type=helm name=postgresql status is ready.
ℹ deploy.postgresql         → [silly] Completing node deploy.postgresql:status. aborted=false, error=null
[silly] GraphSolver: loop
[silly] Request deploy.postgresql:request:611aae0a-21a0-4b22-a0e3-d3934ef80ae1 has ready status and force=false, no need to process.
ℹ deploy.postgresql         → [silly] Completing node deploy.postgresql:request:611aae0a-21a0-4b22-a0e3-d3934ef80ae1. aborted=false, error=null
With variables
ℹ deploy.postgresql         → Deploying postgresql (type helm) at version v-fdfdbc3ae2...
ℹ deploy.postgresql         → [silly] Getting 'deploy' handler for Deploy type=helm name=postgresql
ℹ deploy.postgresql         → [silly] Calling deploy handler for action Deploy type=helm name=postgresql
ℹ deploy.postgresql         → [silly] Wrote chart values to /tmp/ecd1dc85d8ebf064a39b8d90555d748a
ℹ deploy.postgresql         → [silly] Execing '/home/bryankwok/.garden/tools/helm/19e44f24232cfce8/linux-amd64/helm --kube-context kind-kind --namespace deployment-local-bryankwok dependency update /home/bryankwok/glx_gitlab_repos/garden-local/postgresql/postgresql' in /home/bryankwok/.garden/tools/helm/19e44f24232cfce8/linux-amd64
ℹ deploy.postgresql         → [verbose] [helm] Hang tight while we grab the latest from your chart repositories...
[silly] Calling Cloud API with POST events
[silly] Retrieving client auth token from config store
ℹ deploy.postgresql         → [verbose] [helm] ...Successfully got an update from the "bitnami" chart repository
ℹ deploy.postgresql         → [verbose] [helm] Update Complete. ⎈Happy Helming!⎈
garden.yml
kind: Deploy
name: postgresql
type: helm
description: PostgreSQL Helm chart

spec:
  chart:
    path: ./postgresql
  # values:
  #   auth:
  #     username: ${var.username}
  #     password: ${var.password}
I commented out the values section here, but the difference shows up depending on whether that section with variables is commented out or not.
@freezing-pharmacist-34446
I would also post the manifest output that gets printed when there are variables, but it would be too long here.
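For completeness, the variables referenced in the commented-out values come from our project config. A minimal sketch of that part (placeholder values, not what we actually use):
apiVersion: garden.io/v1
kind: Project
name: garden-local
environments:
  - name: local
providers:
  - name: local-kubernetes
variables:
  username: postgres
  password: changeme  # placeholder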
f
So the Postgres chart is not cached for you when there are variables in the values? I tried to reproduce it with this Helm action, and when I run garden deploy for the second time it is cached:
kind: Deploy
type: helm
name: redis
description: Deploy redis
spec:
  chart:
    url: oci://registry-1.docker.io/bitnamicharts/redis
  values:
    architecture: replication
    sentinel:
      enabled: true
    auth:
      enabled: true
      password: ${var.redisPassword}
      sentinel: true
    replica:
      replicaCount: 2
      resources:
        limits:
          cpu: 500m
          memory: 1Gi
        requests:
          cpu: 100m
          memory: 128Mi
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
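For reference, redisPassword is just a project-level variable in my test project, along these lines (placeholder value):
variables:
  redisPassword: test-password
With that in place, the second garden deploy reports the Deploy as already deployed instead of re-installing the chart.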
Btw, I am not doubting your experience, just trying to figure out exactly which scenario leads to this behavior.
s
That is correct.
I don't really have any other test data at the moment. I'll make another attempt later to see if there's anything similar.