Fix: Helm Not Working — Release Already Exists, Stuck Upgrade, and Values Not Applied
Quick Answer
How to fix Helm 3 errors — release already exists, another operation is in progress, --set values not applied, nil pointer template errors, kubeVersion mismatch, hook failures, and ConfigMap changes not restarting pods.
The Error
You try to install a chart and it fails immediately:
Error: INSTALLATION FAILED: cannot re-use a name that is still in use
Or an upgrade is blocked by a previous failed operation:
Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
Or your --set overrides aren’t showing up in the deployed resources:
helm upgrade my-app ./chart --set config.debug=true
# But the pod still shows config.debug=false
Or templates fail with a cryptic nil pointer error:
Error: template: my-chart/templates/deployment.yaml:15:
nil pointer evaluating interface {}.replicas
Each of these has a specific cause and a specific fix.
Why This Happens
Helm 3 stores release state in Kubernetes secrets — one secret per revision, named sh.helm.release.v1.<release-name>.v<revision>. When a release fails or is interrupted mid-operation, this state can become inconsistent. Helm is strict about state consistency: if a secret exists for a release name, it blocks reinstallation; if a release is marked “pending”, it blocks upgrades.
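The payload inside each secret is the release record as gzipped, base64-encoded JSON (kubectl adds the secret's own base64 layer on top). A sketch of those layers: the cluster command appears as a comment (release name and namespace are hypothetical), followed by a local round-trip you can run anywhere:

```shell
# Decoding from a live cluster would look like (hypothetical release/namespace):
#   kubectl get secret sh.helm.release.v1.my-app.v1 -n production \
#     -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip
# Local round-trip through the same encoding layers:
record='{"name":"my-app","info":{"status":"deployed"}}'
printf '%s' "$record" | gzip -c | base64 | base64 | base64 -d | base64 -d | gunzip
# Prints the original record JSON
```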
Template rendering uses Go’s template engine: a missing top-level key renders silently as empty, but accessing a field on a missing (nil) parent produces a nil pointer error unless you add guards. Values passed via --set follow specific escaping rules that differ from YAML — getting them wrong means your values appear correct on the command line but don’t apply.
Fix 1: Release Already Exists
Error: INSTALLATION FAILED: cannot re-use a name that is still in use
You’re running helm install but a release with that name already exists in the cluster. Helm stores release history in Kubernetes secrets and helm list only shows active releases by default — the old release may be hidden.
Check all releases including failed/deleted ones:
helm list -n <namespace> # Active only
helm list -n <namespace> --failed # Failed installs
helm list -n <namespace> --all # Everything including uninstalled
helm list -A # All namespaces
The fix: use helm upgrade --install instead of helm install:
# This upgrades if the release exists, installs if it doesn't
helm upgrade --install my-app ./chart -n production
This is the standard pattern for idempotent deployments — CI/CD pipelines should always use upgrade --install rather than plain install.
If you need to fully replace a broken release:
# Remove the release (keeps history by default in Helm 3)
helm uninstall my-app -n production
# Then install fresh
helm install my-app ./chart -n production
If helm uninstall itself fails or the release state is corrupted, delete the Helm release secrets directly:
# List all Helm release secrets for the release
kubectl get secrets -n production | grep "sh.helm.release.v1.my-app"
# sh.helm.release.v1.my-app.v1 helm.sh/release.v1 1 5d
# Delete the specific revision secrets
kubectl delete secret sh.helm.release.v1.my-app.v1 -n production
# Now helm install will work
helm install my-app ./chart -n production
Pro Tip: Helm 3 stores release metadata in Kubernetes secrets (unlike Helm 2, which used a Tiller pod). This means release state is namespace-scoped and tied to the cluster’s secret API. Deleting these secrets is the nuclear option — you lose rollback history but regain a clean slate.
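Because Helm labels its release secrets (owner=helm, name=&lt;release&gt;), a label selector is a more precise sketch than grep for finding them; verify the list before deleting anything (release name and namespace are assumptions here):

```shell
# List every stored revision for the release via Helm's own labels
kubectl get secrets -n production -l 'owner=helm,name=my-app'
# Delete them all; this is the nuclear option and erases rollback history
kubectl get secrets -n production -l 'owner=helm,name=my-app' -o name \
  | xargs kubectl delete -n production
```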
Fix 2: “Another Operation Is in Progress” — Stuck Release
Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
A previous Helm operation was interrupted (Ctrl+C, node failure, timeout) and left the release in a pending-install, pending-upgrade, or pending-rollback state. Helm refuses to start a new operation until the pending one resolves.
Check the release history and current state:
helm history my-app -n production
REVISION  STATUS           CHART         DESCRIPTION
1         superseded       my-app-1.0.0  Install complete
2         pending-upgrade  my-app-1.1.0  Preparing upgrade   ← stuck here
Option 1: Roll back to the last successful revision:
# Roll back to the previous stable revision
helm rollback my-app 1 -n production
# Or let Helm find the last successful revision automatically
helm rollback my-app 0 -n production # 0 = previous revision
Option 2: Delete the pending secret to forcibly unlock the release:
# Find the stuck revision number from helm history (e.g., revision 2)
kubectl delete secret sh.helm.release.v1.my-app.v2 -n production
# Verify the release is now in a clean state
helm history my-app -n production
# Retry the upgrade
helm upgrade my-app ./chart -n production
Prevent this from happening by using --atomic on upgrades. If the upgrade fails or times out, --atomic automatically rolls back to the previous state, leaving no pending revisions:
helm upgrade my-app ./chart \
  --atomic \
  --timeout 10m \
  -n production
--atomic implies --wait — Helm waits for all resources to become ready before marking the upgrade complete. If anything fails within the timeout, it rolls back and marks the release as failed (not pending).
Fix 3: Values Not Applied — --set Syntax
--set has its own escaping rules that differ from plain YAML. Values appear to be set but don’t show up in rendered templates when the syntax is wrong.
Verify what was actually applied before blaming the chart:
# See the values Helm is actually using for the release
helm get values my-app -n production
# Render templates locally with your --set flags (no cluster needed)
helm template my-app ./chart --set config.debug=true
Dots in annotation keys must be escaped with backslashes:
# WRONG — Helm interprets kubernetes.io as nested keys
helm upgrade my-app ./chart \
  --set 'annotations.kubernetes.io/ingress-class=nginx'
# CORRECT — escape dots with backslash, use single quotes around the arg
helm upgrade my-app ./chart \
  --set 'annotations.kubernetes\.io/ingress-class=nginx'
Use single quotes around --set arguments on the shell to prevent the shell from interpreting backslashes, commas, and other special characters.
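You can see the shell's role without Helm at all; printf stands in for helm here to show exactly what argument the program receives:

```shell
# Unquoted: the shell consumes the backslash before the program sees it
printf '%s\n' annotations.kubernetes\.io/ingress-class=nginx
# Prints: annotations.kubernetes.io/ingress-class=nginx
# Single-quoted: the backslash survives for Helm's --set parser
printf '%s\n' 'annotations.kubernetes\.io/ingress-class=nginx'
# Prints: annotations.kubernetes\.io/ingress-class=nginx
```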
List values use index notation:
# Setting list items
helm upgrade my-app ./chart \
  --set 'tolerations[0].key=node-role.kubernetes.io/master' \
  --set 'tolerations[0].operator=Exists'
# Overriding a list already defined in values.yaml replaces it entirely; re-specify all items
helm upgrade my-app ./chart \
  --set 'extraEnv[0].name=DEBUG' \
  --set-string 'extraEnv[0].value=true' # env var values must be strings
Use --set-string for values that look like numbers or booleans. Helm’s --set parser converts 1.0 to a float, true to a boolean, etc. This breaks image tags and other values that must stay as strings:
# WRONG — image.tag becomes the float 1.0
helm upgrade my-app ./chart --set image.tag=1.0
# CORRECT — stays as the string "1.0"
helm upgrade my-app ./chart --set-string image.tag=1.0
# Relevant for semver-like tags: "2.0", "10", "3.14"
helm upgrade my-app ./chart --set-string image.tag=2.0
For complex values, use --values with a YAML file instead of chaining many --set flags:
# overrides.yaml
config:
  debug: true
  database:
    host: db.production.svc.cluster.local
    port: 5432
image:
  tag: "1.0"
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
helm upgrade my-app ./chart -f overrides.yaml -n production
-f and --set can be combined. Multiple -f files are merged left to right; --set overrides everything. This lets you have a base values.yaml, environment-specific overrides.yaml, and then pin individual values via --set.
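As a quick sketch of that precedence chain (file names are illustrative, and this assumes a chart at ./chart to render against):

```shell
# Rightmost -f wins over earlier files; --set wins over all files
helm template my-app ./chart \
  -f values.yaml \
  -f overrides.yaml \
  --set image.tag=hotfix-1
```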
Fix 4: Template Rendering Errors
Nil pointer errors:
Error: template: my-chart/templates/deployment.yaml:15:
nil pointer evaluating interface {}.replicas
This means your template accessed a value path that doesn’t exist — either because the user didn’t set it or because the values YAML is structured differently than the template expects.
Debug with helm template before deploying:
# Render without installing — safe to run anywhere
helm template my-app ./chart -f values.yaml
# Enable debug output to see exactly what values are available
helm template my-app ./chart -f values.yaml --debug 2>&1 | head -50
Fix nil pointer errors in templates by using the default function or wrapping in if:
# WRONG — renders empty (invalid YAML) if .Values.replicas is not set,
# and nested paths like .Values.deployment.replicas crash with a nil pointer
replicas: {{ .Values.replicas }}
# CORRECT — use default
replicas: {{ .Values.replicas | default 1 }}
# CORRECT — guard each level of a nested path with if
{{- if .Values.config }}
{{- if .Values.config.database }}
host: {{ .Values.config.database.host | default "localhost" }}
{{- end }}
{{- end }}
Run helm lint before committing chart changes:
helm lint ./my-chart
helm lint ./my-chart --strict # Warnings become errors
helm lint ./my-chart -f values.yaml # Lint with specific values
The full pre-deploy validation chain:
helm lint ./my-chart && \
  helm template release-name ./my-chart -f values.yaml > /dev/null && \
  helm upgrade --install release-name ./my-chart -f values.yaml --dry-run --debug
--dry-run --debug sends the rendered templates to the Kubernetes API server for schema validation without creating any resources. This catches type errors and missing required fields that helm template alone won’t catch.
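Alongside default and if, Helm's required template function is worth knowing for values that must be supplied: rendering aborts with your message instead of a nil pointer. A minimal fragment (the message text is illustrative):

```yaml
# templates/deployment.yaml fragment — rendering fails fast if the value is unset
image: {{ required "image.repository is required" .Values.image.repository }}
```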
Fix 5: kubeVersion Constraint Not Met
Error: INSTALLATION FAILED: chart requires kubeVersion: >=1.24.0-0 which is incompatible
with Kubernetes v1.23.12
The chart’s Chart.yaml declares a kubeVersion that your cluster doesn’t satisfy. This protects against deploying charts that rely on APIs not available in your cluster version.
Check your actual cluster version:
kubectl version --short
# Server Version: v1.23.12
# (newer kubectl releases dropped --short; plain `kubectl version` prints the same info)
Option 1: Upgrade your cluster to meet the chart’s requirement (the right fix if the chart uses APIs your cluster doesn’t have).
Option 2: Override the version check if you know the chart is actually compatible (common with pre-release clusters or when the chart author was overly conservative). The --kube-version flag is accepted by helm template and helm lint, while plain helm install validates against the live cluster; so render with an overridden version and apply the output directly:
helm template my-app ./chart --kube-version v1.24.0 | kubectl apply -f -
This tells Helm to behave as if the cluster is running v1.24.0 during constraint validation and template rendering. Note that resources created this way are not tracked as a Helm release, so prefer Option 3 when you control the chart.
Option 3: If you own the chart, update Chart.yaml:
apiVersion: v2
name: my-chart
version: 1.0.0
kubeVersion: ">=1.23.0-0" # Update to match your actual minimum
The -0 suffix (e.g., 1.24.0-0) is important — it allows pre-release-style cluster versions like 1.24.0-rc.0 (and distro-suffixed versions such as 1.24.0-gke.1) to satisfy the constraint.
Fix 6: ImagePullBackOff After Helm Install
A successful helm install that results in ImagePullBackOff pods isn’t a Helm bug — but Helm values are usually the cause. The wrong image tag, repository, or missing pull secret ends up in the deployed manifest.
Find out what image Helm actually deployed:
helm get manifest my-app -n production | grep -A 2 "image:"
This shows the exact image string in the deployed Deployment spec. If it’s different from what you intended, the values override didn’t apply correctly.
Common causes:
Wrong tag — image.tag defaulting to latest instead of your specified version:
helm upgrade my-app ./chart --set-string image.tag=v1.2.3
helm get manifest my-app -n production | grep "image:"
# Should now show: image: myregistry.io/myapp:v1.2.3
Missing pull secret for a private registry:
# Create the secret in the target namespace
kubectl create secret docker-registry regcred \
  --docker-server=myregistry.io \
  --docker-username=myuser \
  --docker-password=mypassword \
  -n production
# Reference it in Helm values
helm upgrade my-app ./chart \
  --set 'imagePullSecrets[0].name=regcred' \
  -n production
Image doesn’t exist — verify outside of Kubernetes:
docker pull myregistry.io/myapp:v1.2.3
See Docker image not found for registry authentication and image naming errors that apply equally here.
For the full breakdown of image pull failure types within Kubernetes, see Kubernetes ImagePullBackOff.
Fix 7: Hook Failures and Stuck Installs
Helm hooks run Jobs or other resources at specific lifecycle points (pre-install, post-upgrade, etc.). If a hook fails, the release can get stuck in a failed state even though no application pods were deployed.
Find what’s failing:
# Check for hook pods in any state
kubectl get pods -n production | grep "my-app"
# View hook job logs
kubectl logs -n production job/my-app-pre-install -f
# Describe the hook pod for events
kubectl describe pod -n production -l "helm.sh/chart=my-chart"
Skip hooks entirely when you need to deploy despite a known hook issue:
helm upgrade --install my-app ./chart --no-hooks -n production
In your chart templates, always set a delete policy on hooks to prevent orphaned hook resources from accumulating:
# templates/pre-install-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "my-chart.fullname" . }}-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded"
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["python", "manage.py", "migrate"]
before-hook-creation deletes the old hook Job before creating a new one on each upgrade, preventing name collisions. hook-succeeded cleans up after success. Failed jobs are preserved for debugging.
Common Mistake: Setting hook-delete-policy: hook-succeeded only — this leaves failed hook Jobs around, which cause name conflicts on the next upgrade attempt. Include before-hook-creation to prevent this.
Fix 8: ConfigMap Changes Not Restarting Pods
After helm upgrade, a new ConfigMap is deployed but running pods still use the old configuration. Kubernetes doesn’t restart pods just because a ConfigMap they reference changed — it only restarts pods when the Pod spec itself changes.
The standard fix: add a checksum annotation to your Deployment template.
When the ConfigMap content changes, the hash changes, the annotation changes, the Pod spec changes, and Kubernetes performs a rolling update:
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-chart.fullname" . }}
spec:
  template:
    metadata:
      annotations:
        # Recalculated on every helm upgrade — triggers rollout when ConfigMap changes
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          envFrom:
            - configMapRef:
                name: {{ include "my-chart.fullname" . }}-config
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
Verify the rollout happens after upgrade:
helm upgrade my-app ./chart -n production
kubectl rollout status deployment/my-app -n production
# Waiting for deployment "my-app" rollout to finish: 1 out of 3 new replicas updated...
# deployment "my-app" successfully rolled out
Manual fix for an already-deployed release without the checksum pattern:
kubectl rollout restart deployment/my-app -n production
This works for the immediate problem but doesn’t prevent it recurring. Add the checksum annotation to make future upgrades automatic.
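A variant of the checksum pattern for charts where pods consume values directly (e.g., templated env vars) rather than a rendered ConfigMap file: hash the relevant values subtree itself. toYaml and sha256sum are standard Helm template functions; the checksum/values key name is just a convention:

```yaml
annotations:
  # Changes whenever anything under .Values.config changes, forcing a rollout
  checksum/values: {{ .Values.config | toYaml | sha256sum }}
```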
For related pod restart issues — pods restarting in a loop after configuration changes — see Kubernetes CrashLoopBackOff.
Still Not Working?
helm diff — Preview Changes Before Applying
The helm-diff plugin shows exactly what would change in the cluster before you run helm upgrade. Install it once, use it everywhere:
helm plugin install https://github.com/databus23/helm-diff
# Preview what upgrade would change
helm diff upgrade my-app ./chart -f values.yaml -n production
This is indispensable for catching unexpected value changes or accidental resource deletions before they hit production.
Release History and Rollback
Every helm upgrade creates a new revision. You can roll back to any previous state:
helm history my-app -n production # See all revisions
helm rollback my-app 3 -n production # Roll back to revision 3
helm rollback my-app 0 -n production # Roll back to previous revision
Combine with helm get manifest my-app --revision 3 -n production to inspect what was deployed at any historical point.
Debugging What’s Actually in the Cluster
When the live state doesn’t match what Helm says it deployed:
# What Helm thinks is deployed
helm get manifest my-app -n production
# What's actually in Kubernetes (may differ if manually patched)
kubectl get deployment my-app -n production -o yaml
Drift between these two indicates someone applied kubectl changes directly without going through Helm, which breaks Helm’s upgrade logic. For pods that never reach Ready state after a Helm install, work through the Kubernetes-level causes — see Kubernetes Pod Pending for scheduling issues and Kubernetes ImagePullBackOff for image errors.
Helm Chart Dependencies
If helm install fails with Error: found in Chart.yaml, but missing in charts/, run:
helm dependency update ./my-chart
This downloads all subcharts listed in Chart.yaml into the charts/ directory. Dependency charts are not stored alongside your chart source by default — you must run dependency update before packaging or installing.
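For reference, dependencies are the entries helm dependency update resolves from Chart.yaml; a minimal declaration looks like this (chart name, version range, and repository URL are illustrative):

```yaml
apiVersion: v2
name: my-chart
version: 1.0.0
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami
```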
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: Terraform Import Error — Resource Not Importable or State Conflict
How to fix Terraform import errors — terraform import syntax, import blocks (Terraform 1.5+), state conflicts, provider-specific import IDs, and importing existing infrastructure.
Fix: Kubernetes HPA Not Scaling — HorizontalPodAutoscaler Shows Unknown or Doesn't Scale
How to fix Kubernetes HorizontalPodAutoscaler issues — metrics-server not installed, CPU requests not set, unknown metrics, scale-down delay, custom metrics, and KEDA.
Fix: nginx Upstream Load Balancing Not Working — All Traffic Hitting One Server
How to fix nginx load balancing issues — upstream block configuration, health checks, least_conn vs round-robin, sticky sessions, upstream timeouts, and SSL termination.
Fix: Kubernetes Secret Not Mounted — Pod Cannot Access Secret Values
How to fix Kubernetes Secrets not being mounted — namespace mismatches, RBAC permissions, volume mount configuration, environment variable injection, and secret decoding issues.