Troubleshooting

Something broke — or rather, something revealed itself, because it was always broken. You merely summoned it into the light.

The disciple descended into the crypt seeking answers, and found only mirrors — each reflecting a different mistake, each older than the last.

Book of Eibon, On Descending (don't quote me on this)

Installing Helm and Helmfile

helmfile2compose needs helm and helmfile to render manifests. Yes, you need the thing you're trying to escape from. The irony is not lost on anyone.

Package manager

Some package managers already have both: brew install helm helmfile on macOS, pacman -S helm helmfile on Arch. If that works, skip the manual install below.

Debian/Ubuntu don't package them — install manually:

Helm:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-4 | bash

Auto-detects OS/arch, downloads, verifies checksum, installs to /usr/local/bin. Verify: helm version should print v4.x.

Helmfile:

curl -sL https://github.com/helmfile/helmfile/releases/download/v1.3.2/helmfile_1.3.2_linux_amd64.tar.gz | tar -xzf - helmfile
sudo mv helmfile /usr/local/bin/
# ↑ Version may be stale — check https://github.com/helmfile/helmfile/releases/latest

Verify: helmfile --version should print v1.x. Other platforms: check the release assets for your OS/arch.
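
The pinned command above assumes linux_amd64. A small sketch that derives the right release asset for the current platform instead (the version is still hard-coded and may be stale; check the releases page):

```shell
# Build the helmfile release URL for the current OS/arch.
# VERSION is pinned by hand -- check the releases page for the latest tag.
VERSION=1.3.2
OS=$(uname -s | tr '[:upper:]' '[:lower:]')          # linux, darwin
case "$(uname -m)" in
  x86_64)        ARCH=amd64 ;;
  aarch64|arm64) ARCH=arm64 ;;
  *) echo "unsupported arch: $(uname -m)" >&2; exit 1 ;;
esac
URL="https://github.com/helmfile/helmfile/releases/download/v${VERSION}/helmfile_${VERSION}_${OS}_${ARCH}.tar.gz"
echo "$URL"
# then: curl -fsSL "$URL" | tar -xzf - helmfile && sudo mv helmfile /usr/local/bin/
```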

nerdctl compose does not work

nerdctl compose silently ignores networks.*.aliases — the key that makes K8s FQDNs resolve in compose. Without it, every service that references another by its K8s DNS name (svc.ns.svc.cluster.local) will fail to connect.

The fix: install the flatten-internal-urls transform. It strips all network aliases and rewrites FQDNs to short compose service names, which nerdctl resolves natively. No runtime change needed.

python3 dekube-manager.py flatten-internal-urls
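
For illustration, with hypothetical service and namespace names, and assuming the transform behaves as described above, it turns this (which nerdctl breaks on):

```yaml
services:
  redis:
    networks:
      default:
        aliases:
          - redis.myapp.svc.cluster.local   # silently ignored by nerdctl
  web:
    environment:
      REDIS_URL: redis://redis.myapp.svc.cluster.local:6379
```

into this, which any compose runtime resolves natively:

```yaml
services:
  redis: {}
  web:
    environment:
      REDIS_URL: redis://redis:6379
```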

If you are running Rancher Desktop with containerd: switching to dockerd (moby) in Rancher Desktop settings avoids the problem entirely — one checkbox, one VM restart. See Limitations — Network aliases for the full list of workarounds and caveats (including cert-manager compatibility).

Chart-specific issues

If the issue is specific to a Helm chart rather than helmfile2compose itself — a sidecar that needs the K8s API, a container that expects a CRD controller at runtime, an image that phones home to the apiserver on startup — check the known workarounds. Those are sushi recipes for tentacles that don't fit, organized by chart.

If the chart genuinely needs a kube-apiserver at runtime (leader election, service discovery via API, k8s-sidecar watchers), the fake-apiserver extension can provide one — a fake one, backed by a Python script, self-signed certs, and questionable life choices. Install it, and the problem goes away. Whether it's replaced by a worse problem is a matter of perspective.

Network alias collisions (multi-project)

Your stack works half the time and breaks the other half? Services resolve to the wrong container? One request succeeds, the next returns someone else's login page?

When multiple helmfile2compose projects share the same Docker network (network: shared-infra in dekube.yaml), every network alias from every service in every project lands on the same DNS namespace. Every FQDN, every short name, every cursed .svc.cluster.local suffix — all of them, cohabiting in a flat network.

The FQDNs are mostly safe — K8s namespaces are baked into the names, so redis.stoatchat-redis.svc.cluster.local and redis.lasuite-redis.svc.cluster.local resolve to different containers.

The short aliases do not. When a K8s Service name differs from its compose service name (e.g. keycloak-service → compose service keycloak), the K8s name is added as a short alias. If two projects register the same short alias on the same network, Docker resolves it via round-robin between both containers. Silent. Random.
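
A sketch of the collision, continuing the keycloak-service example with hypothetical project names: both projects emit a service like this on the shared network, and Docker round-robins between the two containers behind the duplicated alias.

```yaml
# Emitted by stoatchat AND by lasuite, on the same shared network:
services:
  keycloak:
    networks:
      shared-infra:
        aliases:
          - keycloak-service   # same short alias in both projects -> round-robin DNS
```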

How to diagnose: inspect the network aliases on your running containers:

docker inspect <container> --format '{{json .NetworkSettings.Networks}}'

If two containers from different projects share the same alias on the same network, you found it. See Advanced — multi-project for the setup that avoids this.
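
To automate the check across every running container, a sketch (assumes the docker CLI; find_dup_aliases is a hypothetical helper name):

```shell
# find_dup_aliases: read "container alias" pairs on stdin,
# print any alias registered by more than one container.
find_dup_aliases() {
  awk '{ count[$2]++; who[$2] = who[$2] " " $1 }
       END { for (a in count) if (count[a] > 1) print a ":" who[a] }'
}

# Emit one "container alias" line per alias on every running container.
# Guarded so the sketch is copy-pasteable even where docker is absent.
if command -v docker >/dev/null 2>&1; then
  docker ps -q | xargs -r docker inspect \
    --format '{{$n := .Name}}{{range .NetworkSettings.Networks}}{{range .Aliases}}{{$n}} {{.}}{{"\n"}}{{end}}{{end}}' \
    | find_dup_aliases
fi
```

Any line it prints names a colliding alias followed by the containers that claim it.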

Ingress rules are missing from Caddyfile

The Caddyfile is empty, or some Ingress manifests are silently skipped. This means no IngressRewriter matched those manifests.

Diagnose:

  1. Check stderr output — look for "unknown ingressClassName" or "no rewriter matched" warnings.
  2. Verify which ingress controller your Ingress manifests use: ingressClassName in the spec, or kubernetes.io/ingress.class annotation.
  3. Check that the matching rewriter is installed:
| Controller | Rewriter | How to install |
| --- | --- | --- |
| HAProxy | bundled | already included |
| Nginx | dekube-rewriter-nginx | python3 dekube-manager.py nginx |
| Traefik | dekube-rewriter-traefik | python3 dekube-manager.py traefik |
  4. If your cluster uses custom class names (e.g. haproxy-internal, nginx-dmz), add a mapping in dekube.yaml:

     ingress_types:
       haproxy-internal: haproxy
       nginx-dmz: nginx

Without this mapping, helmfile2compose won't recognize the class and the Ingress is skipped.

  5. If your controller isn't listed above (Contour, Ambassador, Istio, AWS ALB...), basic host/path/backend routing still works, but controller-specific annotations won't translate. Consider writing a rewriter.
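
For step 2, these are the two places the class can live in a standard Ingress manifest (only one is normally set; names here are the hypothetical custom classes from step 4):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example                                     # hypothetical
  annotations:
    kubernetes.io/ingress.class: haproxy-internal   # legacy annotation form
spec:
  ingressClassName: haproxy-internal                # current field form
```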

Exit codes

| Code | Meaning |
| --- | --- |
| 0 | Success |
| 1 | Fatal error (bad config, missing helmfile, extension conflict) |
| 2 | Empty output — no services generated (everything excluded or no convertible manifests) |

Useful for generate-compose.sh or CI: python3 helmfile2compose.py ... || exit $?
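
A slightly richer sketch for CI that distinguishes the two failure modes from the table above (handle_rc is a hypothetical helper name):

```shell
# Map helmfile2compose exit codes to CI outcomes.
handle_rc() {
  case "$1" in
    0) echo "compose files generated" ;;
    2) echo "empty output: everything excluded or nothing convertible" >&2
       return 2 ;;
    *) echo "fatal error (exit $1): check config, helmfile, extensions" >&2
       return "$1" ;;
  esac
}

# Usage (hypothetical invocation):
#   python3 helmfile2compose.py dekube.yaml; handle_rc $?
```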

Still stuck? Open an issue — include the error, your dekube.yaml, and which extensions you're using.

| Problem | Where to open an issue |
| --- | --- |
| Conversion output is wrong, stack doesn't boot | helmfile2compose |
| A specific extension misbehaves | The extension's own repo (see catalogue) |
| Engine contract bug, pipeline issue | dekube-engine |
| dekube-manager can't install something | dekube-manager |
| Not sure where to file | helmfile2compose — I'll triage |