
Fix: Kubernetes Secret Not Mounted — Pod Cannot Access Secret Values

FixDevs

Quick Answer

How to fix Kubernetes Secrets not being mounted — namespace mismatches, RBAC permissions, volume mount configuration, environment variable injection, and secret decoding issues.

The Problem

A Pod can’t access a Kubernetes Secret:

Error: secret "db-credentials" not found

Or the Pod fails to start because a Secret referenced in envFrom or volumes doesn’t exist:

Warning  Failed     3s    kubelet  Error: secret "api-keys" not found
Warning  BackOff    1s    kubelet  Back-off restarting failed container

Or the Secret exists but the mounted file contains garbled data:

# Inside the container:
cat /secrets/password
# dXNlcjpwYXNzd29yZA==   ← Base64-encoded, not the actual value

Or a Pod in one namespace can’t access a Secret from another namespace:

Error from server (NotFound): secrets "shared-secret" not found

Why This Happens

Kubernetes Secrets have several gotchas:

  • Namespace isolation — Secrets are namespace-scoped. A Pod can only access Secrets in the same namespace. There is no cross-namespace Secret sharing by default.
  • Secret must exist before the Pod — if a Pod references a Secret in volumes or envFrom and the Secret doesn’t exist, the Pod fails to start. Kubernetes does not wait for the Secret to be created.
  • Base64 encoding — Secret data values are base64-encoded. If you paste a base64 value into stringData (which expects plain text), it gets double-encoded when mounted.
  • Case-sensitive keys — Secret key names are case-sensitive. DB_PASSWORD and db_password are different keys.
  • RBAC restricting Secret access — in hardened clusters, ServiceAccounts may lack permission to read Secrets through the API. Note that Secret volumes are mounted by the kubelet using its own node credentials, so ServiceAccount RBAC mainly affects applications that fetch Secrets from the API at runtime.
  • Immutable Secrets — Secrets marked immutable: true can’t be updated. A new Secret with a different name must be created.
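The double-encoding gotcha from the list above can be reproduced locally with nothing but base64 (no cluster needed):

```shell
# A base64 value pasted into stringData is treated as plain text and
# encoded again, so the mounted file contains base64, not the secret.
PLAIN='mysecretpassword'
ENCODED=$(printf '%s' "$PLAIN" | base64)    # correct form for 'data'
DOUBLE=$(printf '%s' "$ENCODED" | base64)   # what you get if ENCODED is pasted into stringData
echo "$ENCODED"             # bXlzZWNyZXRwYXNzd29yZA==
echo "$DOUBLE" | base64 -d  # still base64: one decode is not enough
```

Decoding the double-encoded value once yields base64 text, which is exactly the garbled-file symptom shown in "The Problem" above.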

Fix 1: Verify the Secret Exists in the Right Namespace

# Check if the Secret exists
kubectl get secret db-credentials -n my-namespace

# List all Secrets in the namespace
kubectl get secrets -n my-namespace

# Describe the Secret to see its keys (values are hidden)
kubectl describe secret db-credentials -n my-namespace

# Output shows:
# Name:         db-credentials
# Namespace:    my-namespace
# Labels:       <none>
# Type:         Opaque
# Data
# ====
# password:  16 bytes
# username:  5 bytes

Check the Pod’s namespace matches the Secret’s namespace:

# Get the Pod's namespace
kubectl get pod my-pod -o jsonpath='{.metadata.namespace}'

# Get the Secret's namespace
kubectl get secret db-credentials -o jsonpath='{.metadata.namespace}'

# Both must match

Secrets can’t cross namespaces — if you need a Secret in multiple namespaces, copy it:

# Copy a Secret from one namespace to another
kubectl get secret shared-secret -n source-ns -o yaml | \
  sed 's/namespace: source-ns/namespace: target-ns/' | \
  kubectl apply -f -

# Or strip server-set fields with jq before applying (avoids
# resourceVersion/uid conflicts in the target namespace)
kubectl get secret shared-secret -n source-ns -o json | \
  jq 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp, .metadata.namespace)' | \
  kubectl apply -n target-ns -f -

Fix 2: Create the Secret Correctly

From literal values (most common):

# Create Secret with literal key-value pairs
kubectl create secret generic db-credentials \
  --from-literal=username=myuser \
  --from-literal=password=mysecretpassword \
  -n my-namespace

# Verify it was created
kubectl get secret db-credentials -n my-namespace -o yaml
# data values are base64-encoded — that's expected

From a file:

# Create Secret from files (file content becomes the value)
kubectl create secret generic tls-certs \
  --from-file=tls.crt=./certs/server.crt \
  --from-file=tls.key=./certs/server.key \
  -n my-namespace

Using YAML manifest:

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: my-namespace  # ← Must match the Pod's namespace
type: Opaque
stringData:                # ← Use stringData for plain text (auto-encoded)
  username: myuser
  password: mysecretpassword
  # Don't base64-encode here — stringData handles it automatically
Apply it:

kubectl apply -f secret.yaml

Common Mistake: Manually base64-encoding values and putting them in stringData. The stringData field accepts plain text and encodes automatically. If you put dXNlcjpwYXNzd29yZA== in stringData, it stores the base64 string literally (and then re-encodes it). Use data for pre-encoded values, stringData for plain text.

# Correct use of data vs stringData:
data:
  password: bXlzZWNyZXRwYXNzd29yZA==  # base64 of "mysecretpassword"

stringData:
  password: mysecretpassword  # Plain text — Kubernetes encodes it

Fix 3: Mount Secret as Environment Variables

Using env (individual keys):

# deployment.yaml
spec:
  containers:
    - name: app
      image: my-app:latest
      env:
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-credentials   # Secret name
              key: username          # Key within the Secret
              optional: false        # Pod fails if Secret/key doesn't exist
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password

Using envFrom (all keys from a Secret):

spec:
  containers:
    - name: app
      envFrom:
        - secretRef:
            name: db-credentials    # All keys become env vars with the same name
            optional: false

Verify env vars inside the Pod:

kubectl exec -it my-pod -- env | grep DB_
# DB_USERNAME=myuser
# DB_PASSWORD=mysecretpassword

Fix 4: Mount Secret as Volume Files

For TLS certificates, config files, or any multi-line secrets:

spec:
  volumes:
    - name: db-creds-volume
      secret:
        secretName: db-credentials     # Secret to mount
        defaultMode: 0400              # Read-only for owner (recommended for secrets)
        items:                         # Optional: select specific keys
          - key: password
            path: db-password.txt      # Filename inside the container
          - key: username
            path: db-username.txt

  containers:
    - name: app
      volumeMounts:
        - name: db-creds-volume
          mountPath: /secrets          # Directory inside the container
          readOnly: true

# Verify inside the container
kubectl exec -it my-pod -- ls /secrets
# db-password.txt
# db-username.txt

kubectl exec -it my-pod -- cat /secrets/db-password.txt
# mysecretpassword  ← Plain text (Kubernetes decodes base64 automatically)

Mount all Secret keys (no items filter):

volumes:
  - name: all-creds
    secret:
      secretName: db-credentials
      # No 'items' — all keys become files named after their key

# In the container:
ls /secrets
# username   password   (file names match Secret keys)
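If only a single file from the Secret is needed inside an existing directory, a subPath mount is a common pattern. A sketch using the same hypothetical names as above (the /etc/app path is illustrative); note the important caveat in the comments:

```yaml
# Sketch: mount one Secret key as a single file without shadowing /etc/app.
# Caveat: subPath mounts are NOT updated automatically when the Secret changes.
spec:
  volumes:
    - name: db-creds-volume
      secret:
        secretName: db-credentials
        items:
          - key: password
            path: db-password.txt
  containers:
    - name: app
      volumeMounts:
        - name: db-creds-volume
          mountPath: /etc/app/db-password.txt  # a single file, not a directory
          subPath: db-password.txt             # must match 'path' above
```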

Fix 5: Fix RBAC Blocking Secret Access

In clusters with restricted RBAC, applications that read Secrets through the Kubernetes API may lack permission. Kubernetes RBAC is purely additive (there are no deny rules), so a failure simply means the ServiceAccount is missing a Role that grants access. Volume mounts themselves are performed by the kubelet using node credentials, so ServiceAccount RBAC mainly affects applications fetching Secrets via the API:

# Check if the ServiceAccount can access the Secret
kubectl auth can-i get secret/db-credentials \
  --namespace my-namespace \
  --as system:serviceaccount:my-namespace:my-service-account

# yes → access is allowed
# no → RBAC is blocking

Create a Role that grants Secret access:

# role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: my-namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-credentials"]  # Only this specific Secret
    verbs: ["get"]

---
# rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: my-namespace
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
Apply both:

kubectl apply -f role.yaml -f rolebinding.yaml
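The same Role and RoleBinding can also be created imperatively. A sketch assuming the same namespace, ServiceAccount, and Secret names as above:

```shell
# Imperative equivalents of role.yaml and rolebinding.yaml
kubectl create role secret-reader -n my-namespace \
  --verb=get --resource=secrets --resource-name=db-credentials

kubectl create rolebinding secret-reader-binding -n my-namespace \
  --role=secret-reader \
  --serviceaccount=my-namespace:my-service-account
```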

Fix 6: Handle Secret Updates — Mounted Volumes vs Env Vars

Kubernetes updates mounted Secret volumes automatically when the Secret changes (with a small delay). But environment variables from Secrets are NOT updated — they’re set at Pod start and remain static:

# Update a Secret
kubectl patch secret db-credentials -n my-namespace \
  --type='json' \
  -p='[{"op": "replace", "path": "/data/password", "value": "'$(echo -n "newpassword" | base64)'"}]'

# Pods using volume mounts — Secret auto-updates within ~1 minute
# Pods using envFrom/env — still have the OLD value until Pod restarts

Force Pod restart after Secret update:

# Rollout restart — updates Pods one by one (zero-downtime)
kubectl rollout restart deployment/my-app -n my-namespace

# Verify new Pods have the updated value
kubectl exec -it $(kubectl get pod -l app=my-app -n my-namespace -o jsonpath='{.items[0].metadata.name}') \
  -- env | grep DB_PASSWORD

Use volume mounts (not env vars) for secrets that rotate — volume-mounted Secrets update automatically. Environment variables require a Pod restart.
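For env-var consumers that must pick up rotated values, a common workaround (Helm charts use a variant of it) is to stamp a hash of the Secret's data into the Pod template, so any content change triggers a rolling restart. A sketch assuming jq is installed and the deployment/Secret names used above:

```shell
# hash_secret_data: stable hash of a Secret's .data (jq -S sorts keys,
# so identical content always produces an identical hash)
hash_secret_data() {
  jq -S .data | sha256sum | awk '{print $1}'
}

# Against a live cluster (hypothetical names):
#   HASH=$(kubectl get secret db-credentials -n my-namespace -o json | hash_secret_data)
#   kubectl patch deployment my-app -n my-namespace --type merge -p \
#     "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secret-hash\":\"$HASH\"}}}}}"
```

Patching the annotation changes the Pod template, which makes the Deployment roll out new Pods that read the updated Secret.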

Fix 7: Debug Secret Mounting Failures

Check Pod events for Secret errors:

kubectl describe pod my-pod -n my-namespace
# Look for events at the bottom:
# Warning  Failed  3s  kubelet  Error: secret "db-credentials" not found
# Warning  Failed  3s  kubelet  MountVolume.SetUp failed for volume "creds-volume":
#          secret "db-credentials" not found

Check if Secret data is correctly decoded:

# Decode a Secret value directly
kubectl get secret db-credentials -n my-namespace \
  -o jsonpath='{.data.password}' | base64 --decode

# Compare with what's mounted in the Pod
kubectl exec -it my-pod -- cat /secrets/password

# Both should match

Secret created with wrong key name:

# List the actual keys in the Secret
kubectl get secret db-credentials -o jsonpath='{.data}' | python3 -c "
import sys, json
data = json.load(sys.stdin)
print('Keys:', list(data.keys()))
"
# Keys: ['Password', 'Username']   ← Capital P — doesn't match 'password' in secretKeyRef
# Fix: match the exact key name from the Secret
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: Password   # ← Capital P to match the Secret's actual key

Use External Secrets Operator for secrets from AWS/GCP/Vault:

# ExternalSecret — syncs from AWS Secrets Manager to Kubernetes Secret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: my-namespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: db-credentials      # Creates/updates this Kubernetes Secret
    creationPolicy: Owner
  data:
    - secretKey: password     # Key in the Kubernetes Secret
      remoteRef:
        key: myapp/database   # AWS Secrets Manager path
        property: password    # JSON field in the secret

Still Not Working?

Secret exists but Pod can’t find it — double-check for typos in secretName and the namespace, and for admission webhooks or policy controllers that may be mutating the Pod spec or rejecting the mount.

TLS Secret format — Kubernetes TLS Secrets must use specific key names:

# The resulting Secret always stores keys named tls.crt and tls.key
kubectl create secret tls my-tls-secret \
  --cert=./certs/server.crt \
  --key=./certs/server.key

If you manually create a TLS Secret with different key names, nginx Ingress or cert-manager may not find the certificate.
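In manifest form, a TLS Secret uses the kubernetes.io/tls type with exactly those two keys. A sketch with truncated placeholder values (real entries are the full base64-encoded PEM files):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
  namespace: my-namespace
type: kubernetes.io/tls        # this type requires tls.crt and tls.key
data:
  tls.crt: LS0tLS1CRUdJTi...   # base64 of the PEM certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi...   # base64 of the PEM private key (placeholder)
```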

imagePullSecrets — for private container registries, the pull Secret must be in the same namespace as the Pod AND referenced in the Pod spec:

spec:
  imagePullSecrets:
    - name: registry-credentials  # Must exist in the same namespace
  containers:
    - image: my-private-registry.example.com/app:latest
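The registry Secret itself is typically created with the docker-registry type. A sketch with placeholder credentials (server, username, and password are illustrative):

```shell
# Creates a kubernetes.io/dockerconfigjson Secret for the private registry
kubectl create secret docker-registry registry-credentials \
  -n my-namespace \
  --docker-server=my-private-registry.example.com \
  --docker-username=ci-bot \
  --docker-password='REPLACE_ME'
```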

For related Kubernetes issues, see Fix: Kubernetes OOMKilled and Fix: Kubernetes ConfigMap Not Updating.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
