
Fix: kubectl – The Connection to the Server Was Refused or Context Not Found

FixDevs

Quick Answer

kubectl errors like 'connection refused', 'context not found', or 'unable to connect to the server' almost always trace back to one of four things: a missing or misconfigured kubeconfig, a stale context, expired credentials, or a cluster that is not running. Check them in that order.

The Error

You run a kubectl command and hit one of these errors:

error: context "my-cluster" does not exist
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Unable to connect to the server: dial tcp 192.168.1.100:6443: connect: connection refused
error: no context exists with the name: "arn:aws:eks:us-east-1:123456789:cluster/production"
Unable to connect to the server: getting credentials: exec: executable aws not found

All of these point to the same general problem: kubectl either cannot find the cluster configuration it needs, or the cluster it is trying to reach is not available. The specific message tells you which part of the chain is broken — the kubeconfig file, the context, the credentials, or the cluster itself.

Why This Happens

kubectl relies on a configuration file (called a kubeconfig) to know where the Kubernetes API server lives and how to authenticate with it. By default, it looks at ~/.kube/config. Inside that file, there are three key sections:

  • Clusters — the API server addresses and their CA certificates.
  • Users — credentials (tokens, client certificates, or exec-based auth plugins) for each cluster.
  • Contexts — named combinations of a cluster, a user, and optionally a namespace. A context ties everything together.

When you run kubectl, it reads the current context from the kubeconfig and uses the associated cluster and user entries to connect. If any piece is missing, wrong, or expired, the command fails.
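For orientation, a minimal kubeconfig with one entry in each section looks roughly like this (all names, the server address, and the credential values are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://192.168.1.100:6443
    certificate-authority-data: <base64-encoded CA certificate>
users:
- name: my-user
  user:
    token: <bearer token, or an exec-based auth plugin block>
contexts:
- name: my-cluster
  context:
    cluster: my-cluster
    user: my-user
    namespace: default
current-context: my-cluster
```

Note that current-context must match a name in the contexts list, and each context references its cluster and user entries by name. Any broken link in that chain produces one of the errors above.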

Here are the most common reasons:

  • The kubeconfig file is missing or empty. kubectl falls back to localhost:8080, where nothing is listening, and you get a “connection refused” error.
  • The KUBECONFIG environment variable points to a file that does not exist or to a file that does not contain the context you need.
  • You switched machines or shells and the kubeconfig was never copied over or sourced.
  • The cluster is not running. Minikube was stopped, your kind cluster was deleted after a Docker restart, or the remote cluster’s API server is down.
  • Your credentials expired. Cloud provider tokens (AWS, GCP, Azure) have a limited lifespan, and once they expire, kubectl cannot authenticate.
  • You deleted or renamed a context without updating the current-context field.
  • Multiple kubeconfig files exist but only some are included in the KUBECONFIG variable, so kubectl cannot see all your clusters.

Understanding which part is broken determines the fix. The sections below walk through each scenario.

Fix 1: Check Your Current Kubeconfig and Context

Start with the basics. See what kubectl thinks it is working with:

kubectl config view

This prints the entire kubeconfig (with sensitive values redacted). Look at the current-context field at the top. If it is empty or points to a context name that does not appear in the contexts list, that is your problem.

List all available contexts:

kubectl config get-contexts

The active context is marked with an asterisk (*). If you see the context you need in the list but it is not selected, switch to it:

kubectl config use-context my-cluster

If the context you need is not in the list at all, the kubeconfig does not contain it. You need to either add it manually or regenerate it from your cluster provider (see Fix 5 and Fix 6 below).

Verify the connection after switching:

kubectl cluster-info

If this returns the API server address without errors, you are connected.

Fix 2: Fix the KUBECONFIG Environment Variable

kubectl looks for configuration in this order:

  1. The --kubeconfig flag passed directly to the command.
  2. The KUBECONFIG environment variable.
  3. The default file at ~/.kube/config.

If KUBECONFIG is set but points to a non-existent file, kubectl silently falls back to an empty config and you get the localhost:8080 error. Check what it is set to:

echo $KUBECONFIG

If it is empty, kubectl is using the default path. Verify that file exists:

ls -la ~/.kube/config

If the file does not exist, you need to generate or copy one. If you know where your kubeconfig file is, point kubectl to it:

export KUBECONFIG=/home/user/.kube/my-cluster-config

Make it permanent by adding the export to your shell profile:

# For bash
echo 'export KUBECONFIG=/home/user/.kube/my-cluster-config' >> ~/.bashrc
source ~/.bashrc

# For zsh
echo 'export KUBECONFIG=/home/user/.kube/my-cluster-config' >> ~/.zshrc
source ~/.zshrc

If your KUBECONFIG is set but the file has a typo in the path or was accidentally deleted, either fix the path or unset the variable to fall back to the default:

unset KUBECONFIG
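Because kubectl prints no warning for missing files, it is worth checking every path in KUBECONFIG yourself. A small sketch (splits on colons, so it assumes no spaces in paths):

```shell
# List each file in KUBECONFIG (or the default) and flag missing ones;
# kubectl silently skips entries that do not exist.
cfg="${KUBECONFIG:-$HOME/.kube/config}"
for f in $(echo "$cfg" | tr ':' ' '); do
  if [ -e "$f" ]; then
    echo "ok: $f"
  else
    echo "MISSING: $f"
  fi
done
```

Any MISSING line is a path kubectl is quietly ignoring.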

For related issues with environment variables not being picked up correctly, see Fix: Environment Variable Is Undefined.

Fix 3: Start Your Cluster (Minikube, kind, k3d)

If the kubeconfig and context look correct but the connection is refused, the cluster itself is probably not running. This is the single most common cause on development machines.

Minikube:

minikube status

If it shows Stopped or Nonexistent:

minikube start

Minikube automatically updates ~/.kube/config with the correct context when it starts. After starting, verify:

kubectl config use-context minikube
kubectl cluster-info

kind (Kubernetes in Docker):

kind clusters run as Docker containers. On kind releases before v0.8, the cluster did not survive a Docker daemon restart at all; on current releases the node containers should come back with Docker, but they can still return in a broken state.

kind get clusters

If your cluster is not listed, recreate it:

kind create cluster --name my-cluster

If the cluster is listed but Docker shows the containers as stopped, delete and recreate:

kind delete cluster --name my-cluster
kind create cluster --name my-cluster

If your kind cluster disappeared after a Docker restart, you are likely on a kind release before v0.8, which did not persist clusters across daemon restarts. Recreating the cluster is the expected fix.

k3d:

k3d cluster list
k3d cluster start my-cluster

Docker Desktop Kubernetes:

  1. Open Docker Desktop.
  2. Go to Settings > Kubernetes.
  3. Make sure Enable Kubernetes is checked.
  4. If it is already enabled but the status indicator is red or orange, click Reset Kubernetes Cluster.

Then set the context:

kubectl config use-context docker-desktop
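If you are not sure which of these tools is even installed on the machine, a quick plain-shell check narrows things down before you start troubleshooting any one of them:

```shell
# Report which local-cluster tools are on PATH
for tool in minikube kind k3d docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not found"
  fi
done
```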

If your local cluster was recently working and suddenly stopped, a machine reboot or Docker update is almost always the cause. For other connection issues on localhost, see Fix: ERR_CONNECTION_REFUSED on localhost.

Fix 4: Fix a Wrong or Missing Context

The “context not found” error means the kubeconfig’s current-context field references a context name that does not exist in the file. This happens when you delete a context, rename it, or merge kubeconfig files that overwrote each other.

See what the current context is set to:

kubectl config current-context

If this prints a name that does not appear in kubectl config get-contexts, the reference is stale.

Set the current context to one that exists:

kubectl config use-context <valid-context-name>

If you need to manually create a context (for example, you have the cluster and user entries but no context linking them):

kubectl config set-context my-new-context \
  --cluster=my-cluster \
  --user=my-user \
  --namespace=default

kubectl config use-context my-new-context

To see what clusters and users are defined in the kubeconfig (so you know what values to use):

kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'
kubectl config view -o jsonpath='{range .users[*]}{.name}{"\n"}{end}'

To remove a stale context that points to a cluster you no longer use:

kubectl config delete-context old-cluster-context

Fix 5: Refresh Cloud Provider Credentials

Cloud-managed Kubernetes clusters (EKS, GKE, AKS) use short-lived tokens for authentication. When these expire, kubectl returns Unauthorized or fails to execute the credential plugin. The fix is to regenerate the kubeconfig entry from the cloud provider CLI.

AWS EKS:

aws eks update-kubeconfig --region us-east-1 --name my-cluster

This updates ~/.kube/config with a fresh context, cluster, and user entry. If you get an error about missing AWS credentials, fix those first:

aws sts get-caller-identity

If that fails, you need to configure your AWS CLI session. Run aws configure or log in with SSO:

aws sso login --profile my-profile

Make sure the IAM principal (user or role) you are authenticated as has permission to access the EKS cluster. The cluster creator has admin access by default, but other users need to be granted access — via the aws-auth ConfigMap or, on newer clusters, an EKS access entry. For network-level debugging that also applies to cloud connectivity, see Fix: SSH Connection Timed Out.

Google Cloud GKE:

gcloud container clusters get-credentials my-cluster \
  --region us-central1 \
  --project my-project-id

If your gcloud session has expired:

gcloud auth login

For application-default credentials used by automation:

gcloud auth application-default login

Azure AKS:

az aks get-credentials --resource-group my-rg --name my-cluster

If your Azure session has expired:

az login

For Azure AD-integrated clusters, you may need to clear the cached token:

kubelogin remove-tokens
az aks get-credentials --resource-group my-rg --name my-cluster

After running any of these commands, verify the connection:

kubectl get nodes

Fix 6: Merge Multiple Kubeconfig Files

If you work with multiple clusters, you likely have multiple kubeconfig files. kubectl only sees the files listed in the KUBECONFIG variable (or the single default file). If a cluster’s config is in a separate file that is not included, kubectl cannot find its context.

Combine multiple kubeconfig files at runtime:

# Linux / macOS (colon-separated)
export KUBECONFIG=~/.kube/config:~/.kube/config-eks-prod:~/.kube/config-gke-staging

# Windows PowerShell (semicolon-separated)
$env:KUBECONFIG = "$HOME\.kube\config;$HOME\.kube\config-eks-prod"

With this set, kubectl config get-contexts shows contexts from all the listed files.

Merge all files into a single permanent kubeconfig:

# Back up the original first
cp ~/.kube/config ~/.kube/config.bak

# Merge
KUBECONFIG=~/.kube/config:~/.kube/config-eks-prod:~/.kube/config-gke-staging \
  kubectl config view --flatten > ~/.kube/config-merged

# Replace the original
mv ~/.kube/config-merged ~/.kube/config

Verify the merge worked:

kubectl config get-contexts

You should now see all clusters listed. Switch to the one you need:

kubectl config use-context <context-name>

Be careful with merging. If two files define a context with the same name but different clusters, one will overwrite the other. Check for conflicts before merging by inspecting each file’s context names.
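One way to spot such conflicts is to extract the context names from each file and look for duplicates. The sketch below uses throwaway name lists in place of real kubeconfigs; with kubectl installed, generate each list with `KUBECONFIG=<file> kubectl config get-contexts -o name` instead:

```shell
# Sketch: find context names that appear in more than one kubeconfig file.
# The name lists are stand-ins; produce real ones per file with:
#   KUBECONFIG=<file> kubectl config get-contexts -o name
mkdir -p /tmp/kcfg-demo
printf 'prod\ndev\n'   > /tmp/kcfg-demo/names-a   # context names from file A
printf 'prod\nstage\n' > /tmp/kcfg-demo/names-b   # context names from file B
cat /tmp/kcfg-demo/names-a /tmp/kcfg-demo/names-b | sort | uniq -d   # -> prod
```

Any name printed by `uniq -d` is defined in more than one file and will be overwritten by a merge.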

Why this matters: The kubeconfig file contains credentials that give full access to your clusters. A corrupted merge, an accidental deletion, or a misconfigured KUBECONFIG variable can lock you out of production. Always back up ~/.kube/config before modifying it, and use kubectl config view to verify the result after any change.

Fix 7: The API Server Is Unreachable (Network Issues)

If the kubeconfig, context, and credentials are all correct but you still get connection refused or timeouts, the problem is between you and the API server.

Test basic connectivity:

# Get the API server address from your kubeconfig
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Test if the port is reachable
curl -k https://<server-address>/healthz

An ok response means the server is up and reachable. So does a 401 or 403 — the server rejected the unauthenticated request, but it is clearly listening. A timeout or connection refused means something is blocking the connection.

Common network blockers:

  1. Firewall or security group rules. Cloud clusters often restrict API server access to specific IP ranges. If your IP changed (e.g., you switched networks), you may be blocked. Check the cluster’s allowed CIDR ranges in your cloud provider console.

  2. VPN not connected. Many production clusters are only reachable through a VPN. If you normally connect over a VPN and it is disconnected, the API server is unreachable.

  3. Proxy settings. Corporate proxies intercept HTTPS traffic and can break the certificate chain. Exclude your cluster from proxy routing:

export NO_PROXY=$NO_PROXY,<cluster-ip>,<cluster-hostname>,.eks.amazonaws.com
export no_proxy=$no_proxy,<cluster-ip>,<cluster-hostname>,.eks.amazonaws.com
  4. Private clusters. EKS, GKE, and AKS all support private API server endpoints that are only reachable from within the VPC. If your cluster has a private endpoint enabled and public access disabled, you must connect from within the cloud network (e.g., through a bastion host or VPN). For issues connecting to remote servers in general, see Fix: SSH Connection Timed Out.

  5. DNS resolution failures. The cluster hostname may not resolve from your current network:

nslookup <cluster-hostname>

If it does not resolve, try using the IP address directly (check the cluster details in your cloud console) or fix your DNS configuration.
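For low-level checks with nc or telnet you need the host and port separately. A small sketch using shell parameter expansion — the URL is an example; substitute the server value from your own kubeconfig:

```shell
# Split an API server URL into host and port for connectivity checks.
server="https://192.168.1.100:6443"     # example; use your cluster's address
hostport="${server#https://}"           # strip the scheme
case "$hostport" in
  *:*) host="${hostport%:*}"; port="${hostport##*:}" ;;  # explicit port
  *)   host="$hostport";      port=443 ;;                # default HTTPS port (e.g. EKS)
esac
echo "$host $port"                      # -> 192.168.1.100 6443
# then: nc -z -w 5 "$host" "$port" && echo reachable
```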

Fix 8: Kubeconfig File Is Corrupted or Has Invalid YAML

If the kubeconfig file has a syntax error, kubectl cannot parse it and shows confusing errors. This sometimes happens after a bad merge or manual edit.

Validate the kubeconfig:

kubectl config view

If this throws a YAML parse error, the file is malformed. Open it and look for common issues:

# Check for obvious YAML problems
cat ~/.kube/config

Common corruption patterns:

  • Duplicate keys from a bad merge (two contexts: sections instead of a merged list).
  • Broken base64 in certificate-authority-data or client-certificate-data fields (truncated or containing newlines).
  • Tabs instead of spaces. YAML does not allow tabs for indentation.
  • Missing quotes around values that contain special characters.
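Two of these patterns are easy to check mechanically. The sketch below writes a deliberately broken sample file so the checks have something to find; run the same grep commands against ~/.kube/config instead:

```shell
# Deliberately broken sample: tab indentation plus a duplicated section
f=/tmp/kubeconfig-broken-sample
printf 'contexts:\n\t- name: broken\ncontexts:\n' > "$f"

tab=$(printf '\t')
grep -n "$tab" "$f"                                     # lines using tab indentation
echo "contexts sections: $(grep -c '^contexts:' "$f")"  # a healthy file has exactly one
```

More than one `contexts:` (or `clusters:`, `users:`) section is the signature of a bad merge; any tab-indented line is invalid YAML.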

If the file is beyond repair, back it up and regenerate:

mv ~/.kube/config ~/.kube/config.broken

Then regenerate from your cluster provider (see Fix 5) or copy a known-good version from another machine.

If you encounter Docker Compose connectivity issues alongside Kubernetes problems (common when running both locally), see Fix: Docker Compose Up Errors.

Fix 9: kubectl Version Mismatch

kubectl has a version skew policy: it supports clusters within one minor version above or below its own version. If your kubectl is version 1.26 and the cluster is running 1.30, some features may not work and authentication mechanisms may have changed.

Check versions:

kubectl version --client
kubectl version

The second command also shows the server version (if the connection works). If there is a large gap, update kubectl:

# Using curl (Linux)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Using Homebrew (macOS)
brew upgrade kubectl

# Using Chocolatey (Windows)
choco upgrade kubernetes-cli

A version mismatch is rarely the sole cause of “context not found” errors, but it can cause subtle authentication failures, especially with newer exec-based credential plugins that older kubectl versions do not support.
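The skew check itself is simple arithmetic on the minor versions. A sketch with hard-coded example values — substitute the numbers reported by kubectl version:

```shell
# Example values; take these from `kubectl version` output
client_minor=26
server_minor=30

skew=$((server_minor - client_minor))
abs_skew=${skew#-}                      # absolute value
if [ "$abs_skew" -gt 1 ]; then
  echo "outside supported skew: client 1.$client_minor vs server 1.$server_minor"
else
  echo "within supported skew"
fi
```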

Still Not Working?

Common Mistake: Setting KUBECONFIG to a file that does not exist. kubectl does not warn you — it silently falls back to an empty config and connects to localhost:8080, which gives a confusing “connection refused” error that has nothing to do with your actual cluster. Always verify the file exists with ls after setting the variable.

Multiple clusters and losing track of contexts

If you manage many clusters, consider using a context-switching tool to avoid mistakes:

# kubectx -- fast context switching
kubectx my-cluster

# Or use kubectl directly with the --context flag to avoid switching
kubectl get pods --context=my-cluster

You can also scope a context to a single terminal window by giving that shell its own copy of the kubeconfig. Note that kubectl config use-context writes to the file, so two shells pointed at the same file always share the same current context:

cp ~/.kube/config /tmp/kubeconfig-staging
export KUBECONFIG=/tmp/kubeconfig-staging
kubectl config use-context staging-cluster
# Only this shell uses the copy; other sessions keep the original file

The context exists but the cluster entry is missing

Sometimes a context references a cluster name that was removed from the kubeconfig. Check:

kubectl config view -o jsonpath='{.contexts[?(@.name=="my-context")].context.cluster}'

Then verify that cluster name exists:

kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'

If the cluster entry is gone, you need to re-add it. The easiest way is to regenerate the kubeconfig from your cloud provider (Fix 5) or re-create the local cluster (Fix 3).

kubectl works with sudo but not without

If sudo kubectl get nodes works but kubectl get nodes does not, the kubeconfig file permissions are the issue:

sudo chown $(id -u):$(id -g) ~/.kube/config
chmod 600 ~/.kube/config

This makes the file readable only by your user, which is also a security best practice. Kubeconfig files contain credentials and should not be world-readable.

Pods are stuck after fixing the connection

Once you can connect to the cluster again, you might notice pods in a bad state. If pods are in CrashLoopBackOff, the issue predates your connection problem — the pods were failing while you could not see them. See Fix: Kubernetes Pod CrashLoopBackOff for a thorough walkthrough of diagnosing and fixing crashing pods.

Clean slate: regenerate everything

If nothing else works and you are on a development machine, start fresh:

# Back up the old config
mv ~/.kube/config ~/.kube/config.old

# For minikube
minikube delete
minikube start

# For kind
kind delete cluster
kind create cluster

# For cloud clusters, regenerate from the provider CLI
aws eks update-kubeconfig --region us-east-1 --name my-cluster
# or
gcloud container clusters get-credentials my-cluster --region us-central1 --project my-project
# or
az aks get-credentials --resource-group my-rg --name my-cluster

Then verify:

kubectl config get-contexts
kubectl get nodes

This gives you a clean kubeconfig with only the clusters you explicitly added, eliminating any stale contexts, expired certificates, or corrupted entries.


Related: If your kubectl connects but you are getting connection refused errors from services inside the cluster, see Fix: The Connection to the Server localhost:8080 Was Refused (kubectl) for API server-specific troubleshooting.


FixDevs

Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
