blue-lunch-98125
01/12/2023, 3:09 PM
Command "docker exec -i crictl images --output=json docker.io/library/pgbouncer:v-67ed3bcc63" failed with code 1:
Seems like maybe it is missing the correct image name? This is running under Debian. There's nothing unusual about the module or Dockerfile, as far as I can tell. Any ideas what might be going on here?
curved-farmer-35040
01/12/2023, 9:19 PM
bright-policeman-43626
01/13/2023, 5:31 PM
unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "STDIN": no matches for kind "APIService" in version "apiregistration.k8s.io/v1beta1"
unable to recognize "STDIN": no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
unable to recognize "STDIN": no matches for kind "ValidatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
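These errors typically indicate a cluster running Kubernetes 1.22 or newer, where the listed beta APIs were removed; the manifests being piped in need updating to the GA API versions, for example:

```yaml
# The beta APIs from the errors above and their GA replacements
# (the beta versions were removed in Kubernetes 1.22):
#   apiextensions.k8s.io/v1beta1          -> apiextensions.k8s.io/v1
#   apiregistration.k8s.io/v1beta1        -> apiregistration.k8s.io/v1
#   admissionregistration.k8s.io/v1beta1  -> admissionregistration.k8s.io/v1
apiVersion: apiextensions.k8s.io/v1  # was apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com          # hypothetical CRD name
```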
Docker version: docker --version
Docker version 20.10.20, build 9fdeb9c
Also, another question: do you have any experience using Garden to generate custom self-signed certificates? We need TLS, but our installations are ephemeral (shared between multiple PCs, since everything runs locally).
mammoth-flag-56137
01/16/2023, 3:38 AM
ℹ Remaining tasks 0
ℹ Remaining tasks 0
Done!
Done flushing all events and log entries.
success: true
result:
*snip*
  dependencyResults:
    get-service-status.infrastructure-aws:
      type: get-service-status
      key: get-service-status.infrastructure-aws
      name: infrastructure-aws
      description: >-
        getting status for service 'infrastructure-aws' (from module
        'infrastructure-aws')
      completedAt: '2023-01-16T03:10:10.798Z'
      batchId: 2b6e44f4-32f1-42a1-b55a-98ffeed8d8b0
      output:
        state: ready
        version: v-2570f839d4
        outputs:
          access_key: x <<<<<< sensitive
          connect_client_registry: >-
            x
          connect_db_password: x <<<<<< sensitive
          connect_s3_document: >-
            x
          connect_s3_export: >-
            x
          connect_server_registry: >-
            x
          keycloak_admin_password: x <<<<<< sensitive
          keycloak_db_password: x <<<<<< sensitive
          keycloak_registry: >-
            x
          kms_key_id: >-
            x
          one_client_registry: >-
            x
          one_db_password: x <<<<<< sensitive
          one_pricing_registry: >-
            x
          one_s3_document: >-
            x
          one_s3_export: >-
            x
          one_s3_pricing_backup: >-
            x
          one_server_registry: >-
            x
          pricing_db_password: x <<<<<< sensitive
          secret_key: x <<<<<< sensitive
          user_arn: >-
            x
blue-lunch-98125
01/17/2023, 6:03 PM
kind: Module
description: ebtflask
type: container
name: ebtflask
image: efs-server
build:
  timeout: 3600
services:
  - name: ebtflask
    ports:
      - name: http
        containerPort: 5000
    dependencies:
      - kafka-broker
      - pgbouncer
      - redis
    command:
      - nodemon
      - -e
      - py
      - --exec
      - './manage.py runserver --disable_reloader'
      - --config
      - freshebt/nodemon.json
    env:
      EFS_DB_HOST: pgbouncer
      EFS_REDIS_HOST: redis
      OPENSEARCH_HOST: opensearch
exclude:
  - devops/images/**/*
---
kind: Module
description: celery-beat
type: container
name: celery-beat
image: efs-server
build:
  dependencies:
    - name: ebtflask
services:
  - name: celery-beat
    env:
      EFS_DB_HOST: pgbouncer
      EFS_REDIS_HOST: redis
    dependencies:
      - pgbouncer
      - redis
    command:
      - nodemon
      - -e
      - py
      - --exec
      - 'celery beat -A freshebt.app:celery --pidfile= --schedule /var/lib/celery-beat.db --loglevel=INFO'
      - --watch
      - freshebt
include: []
Where the second module simply reuses the container image built in the first, but with a different command.
mammoth-kilobyte-41764
01/17/2023, 11:30 PM
famous-afternoon-28388
01/18/2023, 9:55 AM
tests:
  - name: unit
    command: ["pytest", "tests"]
but the relevant file isn't on the pod. Is what I'm looking for possible, or do I need to find some workaround?
quaint-librarian-55734
01/19/2023, 11:24 PM
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -cp /app/app.jar
This starts up the Java server and also listens for incoming remote debugger client connections on port 5005. Then, in our garden.yaml for the service we have configuration like so:
services:
  - name: "service-x"
    command:
      - "java"
      - "-cp"
      - "/app/app.jar"
    devMode:
      command:
        - "java"
        - "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005"
        - "-cp"
        - "/app/app.jar"
      sync: []
    ports:
      - name: http
        containerPort: 80
        servicePort: 80
        localPort: 8080
      - name: debug-port
        containerPort: 5005
        servicePort: 5005
        localPort: 10000
When launched in devMode, the port forwarding for port 10000 is in place; however, multiple attempts to connect the remote debugger result in various forms of the error "Unable to open debugger port (localhost:10000): java.io.IOException", sometimes including "handshake failed - connection prematurely closed".
If we keep retrying this, eventually it will successfully connect. Once connected, it will usually be able to re-connect again reliably after a disconnect for the duration of the pod's life.
Initially - my thought was that there was some sort of lag between when the service was responding on port 80 (meaning the garden health check saw the service as up) and when the remote debugger was able to accept connections. HOWEVER - when I tried directly creating a kubectl port forward using a command like kubectl port-forward service/service-x 50005:10001
the debugger is immediately and reliably able to connect as soon as the service is deployed.
kind-france-89777
01/20/2023, 1:26 AM
rough-kilobyte-44808
01/23/2023, 5:37 PM
mammoth-flag-56137
01/25/2023, 3:07 AM
one:
  config:
    name: "defaultvalue"
    foo: "bar"
if I have a config for an environment like this in clientenv.yaml:
one:
  config:
    # foo: "bar2"
then when referencing ${var.one.config.name}
in a Kubernetes deployment manifest I get an error:
errors:
  - detail:
      results:
        task.one-migrate:
          type: task
          description: running task one-migrate in module one-server
          key: task.one-migrate
          name: one-migrate
          error:
            detail:
              err:
                detail:
                  value: null
                  nodePath: []
                  fullPath: var.one.config.name
                  opts:
                    allowPartial: true
                    unescape: true
                stack: []
                type: configuration
            type: template-string
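One YAML detail that may be relevant here (an observation about standard YAML parsing, not anything Garden-specific): a mapping key followed only by a comment parses as null, so the overriding varfile may be replacing the whole default map rather than merging into it:

```yaml
# Default variables:
one:
  config:
    name: "defaultvalue"
    foo: "bar"
---
# clientenv.yaml -- `config:` has no value, only a comment,
# so it parses as `config: null`:
one:
  config:
    # foo: "bar2"
---
# If the override replaces rather than deep-merges, the effective value is:
one:
  config: null   # var.one.config.name then resolves to null
```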
Is this the intended behaviour, that having the config:
section without anything in it is effectively doing config: {}
and blanking/nulling any default values?
nutritious-kitchen-29367
01/31/2023, 9:01 AM
big-zoo-27589
01/31/2023, 7:07 PM
bored-grass-81679
02/03/2023, 9:11 AM
tests:
  - name: integration
    dependencies: [secrets, ic-be-helm-ci-integration]
    disabled: ${ !var.integration }
    timeout: 600
    command:
      - /bin/sh
      - '-c'
      - 'python manage.py test instance_api/tests/integration --keepdb'
    resource:
      podSelector:
        layer: backend
        type: controller
        app: lex
kind-carpet-47197
02/09/2023, 8:57 AM
--interactive
as an option for the command garden run service.
Is there a new way of running a service interactively that hasn't been documented yet?
chilly-waitress-62592
02/09/2023, 3:37 PM
values
section of a helm module.
"${ environment.name != 'production' ? 20 : -1 }"
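For illustration, the same expression embedded in a hypothetical helm module (the module name and value path are assumptions, not from the original message):

```yaml
kind: Module
type: helm
name: my-service                # hypothetical
values:
  autoscaling:
    maxReplicas: "${ environment.name != 'production' ? 20 : -1 }"
```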
I'm assuming it's something silly I'm doing.
quick-answer-45507
02/13/2023, 1:49 PM
pullSecrets:
  nexus:
    registry: nexus-docker-out.build-tools.domain-production.com
    serviceAccounts:
      - default
careful-evening-79580
02/15/2023, 2:12 PM
aloof-lamp-69262
02/15/2023, 4:14 PM
wide-nest-40830
02/20/2023, 7:29 PM
mammoth-flag-56137
02/21/2023, 3:26 AM
--server-side
to kubectl apply
when applying Kubernetes manifests?
https://kubernetes.io/docs/reference/using-api/server-side-apply/
I'm trying to create a ~1 MB ConfigMap with Garden, and it fails because client-side kubectl apply stores the object in the last-applied-configuration annotation, which is limited to ~256 KB.
quick-answer-45507
02/21/2023, 4:37 PM
providers:
  - name: local-kubernetes
    context: minikube
    environments: [dev]
    defaultHostname: "whenidev.net"
    setupIngressController: false
    deploymentRegistry:
      hostname: nexus-docker-out.build-tools.DOMAIN-production.com
      namespace: default
    imagePullSecrets:
      - name: gitlab
    copySecrets:
      - name: gitlab
but when I try to apply a Kubernetes manifest that references an image from that repo, with
imagePullSecrets:
  - name: gitlab
defined, I still get "failed to pull image ... no basic auth credentials".
Any ideas?
wonderful-table-85939
02/22/2023, 7:54 PM
agreeable-grass-30619
02/23/2023, 5:10 PM
Pod worker-7866cf9ff5-7skfq: Failed - Failed to pull image "127.0.0.1:5000/vote-demo-quickstart/worker:v-e2e51487b4": rpc error: code =
Unknown desc = failed to pull and unpack image "127.0.0.1:5000/vote-demo-quickstart/worker:v-e2e51487b4": failed to resolve reference
"127.0.0.1:5000/vote-demo-quickstart/worker:v-e2e51487b4": failed to do request: Head
"https://127.0.0.1:5000/v2/vote-demo-quickstart/worker/manifests/v-e2e51487b4": http: server gave HTTP response to HTTPS client
Pod worker-7866cf9ff5-7skfq: Failed - Error: ErrImagePull
Pod worker-7866cf9ff5-7skfq: BackOff - Back-off pulling image "127.0.0.1:5000/vote-demo-quickstart/worker:v-e2e51487b4"
Pod worker-7866cf9ff5-7skfq: Failed - Error: ImagePullBackOff
Does the registry require TLS?
Can I provide it a cert-manager issuer or something?
It also seems you are missing a page referenced in your documentation; I would have liked to have it.
On this page https://docs.garden.io/kubernetes-plugins/remote-k8s/configure-registry
this (non-existent) page is referenced: https://docs.garden.io/guides/in-cluster-building
tall-greece-98310
03/01/2023, 11:24 AM
crds
directory alongside the typical templates directory. It doesn't seem Garden propagates this directory into the .garden/build/
hierarchy, so the CRDs are not installed.
Is this intended or a bug? If intended, how are you supposed to get CRDs installed?
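One possible workaround (an assumption on my part, not a confirmed fix): explicitly list the chart's crds directory in the module's include so it gets staged into .garden/build/ along with the templates:

```yaml
kind: Module
type: helm
name: my-chart           # hypothetical module name
include:
  - Chart.yaml
  - values.yaml
  - templates/**/*
  - crds/**/*            # explicitly stage the CRD directory as well
```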
Thanks!
blue-kite-93685
03/03/2023, 10:48 AM
nutritious-kitchen-29367
03/03/2023, 12:36 PM
fresh-yak-35965
03/03/2023, 11:52 PM
tlsCertificate.secretRef
that wants a k8s secret. How do I do this?
tall-greece-98310
03/06/2023, 11:02 AM
run task
that does a cat
of the file, then capture this via the task's output.log, and I can then use it as input for other tasks, etc.
It turns out that the task script is run in a separate container, that does not have the required volume mounted, and hence the approach fails.
Any tips to accomplish what I want? Any details on how the task container is specified to run? Does it run as a separate container in the same pod, or something like that?
brief-dawn-88958
03/07/2023, 2:02 PM
garden update-remote source --parallel
(via workflow), the relevant log looks like in attached file garden.log
.
So far, so good.
However, sometimes -- maybe every 30th run -- the module resolution mysteriously fails. See attached garden-error.log
.
Please note that in the successful run there are 44 modules; in the failed run garden only finds 39: 5 are missing.
When retrying the failed pipeline, it normally works.
I'd be really happy to eliminate this flakiness in our CI 🙂
Do you have any idea what is going on?