Helm release going on indefinitely
# 🌱|help-and-getting-started
b
Hey y'all I'm having this issue where my release in GCP just gets hanging for a long time:
ℹ api-demographics [silly]  → Getting the release status for api-demographics
ℹ api-demographics [silly]  → Execing '/Users/jhan.silva/.garden/tools/helm/e19cd5906fb2b863/darwin-arm64/helm --kube-context gke_develop_us-east1_dev-1s --namespace local-env status api-demographics --output json' in /Users/jhan.silva/.garden/tools/helm/e19cd5906fb2b863/darwin-arm64
ℹ api-demographics [silly]  → Installing Helm release api-demographics
ℹ api-demographics [silly]  → Execing '/Users/jhan.silva/.garden/tools/helm/e19cd5906fb2b863/darwin-arm64/helm --kube-context gke_develop_us-east1_dev-1s --namespace local-env install api-demographics /Users/jhan.silva/Documents/2024/projects/the-resurrection-of-garden/api/.garden/build/api/base-chart/ --namespace local-env --timeout 300s --values /Users/jhan.silva/Documents/2024/projects/the-resurrection-of-garden/api-demographics/.garden/build/api/base-chart/garden-values.yml --atomic' in /Users/jhan.silva/.garden/tools/helm/e19cd5906fb2b863/darwin-arm64
Even with --log-level silly I can't tell what's going on. It seems to be a Helm problem, but I have another deployment, deployed via remote sources, that works just fine. Version:
0.12.61
These are all the errors I get:
stderr: >-
  Error: INSTALLATION FAILED: release api failed, and has been
  uninstalled due to atomic being set: context deadline exceeded
all: >-
  Error: INSTALLATION FAILED: release api failed, and has been
  uninstalled due to atomic being set: context deadline exceeded
failed: true
timedOut: false
isCanceled: false
killed: false

✖ tasks [silly]             → Failed task deploy.api.622b4153-da43-11ee-bb7c-c78632a60729
ℹ [silly] Remaining tasks 0
ℹ [silly] Remaining tasks 0
[debug] Done flushing all events and log entries.
[silly] Tracking Command Result event.
Payload:
  {"anonymousId":"7a0144cc-3fed-433f-95d7-0bf4c6034937","event":"Command Result","properties":{"projectId":"c347ba6530f99b8f4ae2b8df38b35be91375355f36e02487e57919abc7a02210b3c4baeb3e7e57bec5d0dee00b8ff3952f5bfc0d44b31b741a25bb30fe2100a3","projectIdV2":"warm-obese-moment_c347ba6530f99b8f4ae2b8df38b35be9","projectName":"0c186646287f11d39d822586a581b7fa247667a54d25d5567abb2f4b849786fcb1307812a50d61409898f469cf81209183707cc85b6714fbaf6be829f44ba65d","projectNameV2":"juicy-cheap-sandwich_0c186646287f11d39d822586a581b7fa","enterpriseDomain":"2769c2abae62151b2ebb8658628f7c5f5d0dc0c29fdefdd19a23dec9cb0a7b96d74d82512d1f6906bef65b24a29d84685dfd2fd66964a56fbdaff39fabd69206","enterpriseDomainV2":"ratty-willing-science_2769c2abae62151b2ebb8658628f7c5f","isLoggedIn":false,"ciName":null,"system":{"platform":"darwin","platformVersion":"23.3.0","gardenVersion":"0.12.61"},"isCI":false,"sessionId":"3fa5bf83-cb7b-489d-9201-a08283c74cb5","projectMetadata":{"modulesCount":5,"moduleTypes":["container","helm"],"tasksCount":4,"servicesCount":3,"testsCount":0},"firstRunAt":"Tue, 05 Dec 2023 06:17:08 GMT","latestRunAt":"Mon, 04 Mar 2024 16:22:21 GMT","isRecurringUser":true,"name":"deploy","durationMsec":320901,"result":"failure","errors":["runtime"]}}
1 deploy action(s) failed!
[silly] Error: 1 deploy action(s) failed!
    at handleProcessResults (/snapshot/project/tmp/pkg/cli/node_modules/@garden-io/core/src/commands/base.ts:532:19)
    at DeployCommand.action (/snapshot/project/tmp/pkg/cli/node_modules/@garden-io/core/src/commands/deploy.ts:292:32)
    at GardenCli.runCommand (/snapshot/project/tmp/pkg/cli/node_modules/@garden-io/core/src/cli/cli.ts:537:20)
    at GardenCli.run (/snapshot/project/tmp/pkg/cli/node_modules/@garden-io/core/src/cli/cli.ts:698:26)
    at Object.runCli (/snapshot/project/tmp/pkg/cli/src/cli.ts:41:14)
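Not from the thread, but a sketch of commands that might reveal why the install hits the deadline. The context, namespace, and release names are the ones that appear in the silly-level log above:

```shell
# Hypothetical diagnostic session; substitute your own context/namespace.
CTX=gke_develop_us-east1_dev-1s
NS=local-env

# Was a release left behind in a pending/failed state?
helm --kube-context "$CTX" --namespace "$NS" list --all

# With --atomic, Helm waits for every resource to become ready; a pod stuck
# in CrashLoopBackOff or ImagePullBackOff will burn the whole --timeout
# (300s here) and then trigger the rollback ("context deadline exceeded").
kubectl --context "$CTX" --namespace "$NS" get pods
kubectl --context "$CTX" --namespace "$NS" get events --sort-by=.lastTimestamp
```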
b
Hi @bright-policeman-43626 This is actually an issue with Helm, where it gets stuck when the --atomic flag is set: https://stackoverflow.com/questions/65006907/kubernetes-helm-stuck-with-an-update-in-progress One workaround in Garden is to disable the --atomic flag in the helm action: https://github.com/garden-io/garden/issues/2713
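For reference (not from the thread): in Garden 0.12 the helm module type exposes a flag for skipping `--atomic` on install. Assuming the field is `atomicInstall`, the module config might look roughly like this:

```yaml
# garden.yml — illustrative module config; names are hypothetical
kind: Module
type: helm
name: api-demographics
chart: ./base-chart
atomicInstall: false  # don't pass --atomic to `helm install` / `helm upgrade`
```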
b
Thank you @big-spring-14945, that was actually it; I don't know why it would just die like that. The weird thing is that I'm deploying two services (one parent execution, and the child module gets invoked via remoteSources in the project configuration). The Helm release only ended up hanging for the first release; the application invoked via the remote source didn't have the problem. But it is giving me this error when starting the container:
│ migrations exec /bin/sh: exec format error
I'm using local-docker to build the code; my guess is that the container is getting built with the wrong architecture.
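A quick way to confirm that guess (the image name below is illustrative, and `docker buildx` is assumed to be available):

```shell
# Check which OS/architecture the pushed image was built for (image name is
# hypothetical; substitute the one from your registry)
docker inspect --format '{{.Os}}/{{.Architecture}}' my-registry/migrations:latest

# Local builds on an Apple Silicon machine default to linux/arm64, while most
# GKE node pools run linux/amd64 — that mismatch is exactly what produces
# "exec format error" at container start. One workaround is to pin the
# target platform explicitly:
docker buildx build --platform linux/amd64 -t my-registry/migrations:latest .
```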
b
@bright-policeman-43626 are you using local-docker and sharing a remote registry with other people or other devices that use a different architecture? I think Garden still has a limitation where the cache key doesn't take the architecture into account for Docker builds.
What happens if you force a rebuild using deploy --force-build?
b
Hey @big-spring-14945, yes, I'm using local-docker, and the remote registry is only used by me at the moment. If I re-deploy using --force-build I still get the same issue!
@big-spring-14945 fixed this, the problem was the --atomic flag as well; after adding it everything worked as expected lol…
b
Did you recently change the CPU architecture of your workstation? Adding the atomic flag only changed the Garden version of your action, and thus it got rebuilt locally, which made it work for you. Your cache is still poisoned for people with different architectures. This is a comment describing the issue on GitHub: https://github.com/garden-io/garden/issues/3106#issuecomment-1327542411
b
Yeah, I think something triggered a rebuild and that worked. Architectures are definitely hard… I'm probably going to try in-cluster building instead to avoid those issues.
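For anyone finding this thread later: in Garden 0.12, in-cluster building is enabled on the kubernetes provider via buildMode. A minimal sketch, assuming cluster-buildkit and with illustrative values throughout:

```yaml
# project-level garden.yml — hypothetical provider config for Garden 0.12
providers:
  - name: kubernetes
    context: gke_develop_us-east1_dev-1s
    buildMode: cluster-buildkit  # build inside the cluster, so images match
                                 # the node architecture regardless of laptop
    deploymentRegistry:
      hostname: gcr.io           # hypothetical registry
      namespace: my-project
```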