Fix: Redis Cluster Not Working — MOVED, CROSSSLOT, or Connection Errors
Quick Answer
How to fix Redis Cluster errors — MOVED redirects, CROSSSLOT multi-key operations, cluster-aware client setup, hash tags for key grouping, and failover handling.
The Problem
Redis Cluster returns a MOVED error when running a command:
```
ReplyError: MOVED 7638 192.168.1.3:6379
```

Or a multi-key operation fails with CROSSSLOT:

```
ReplyError: CROSSSLOT Keys in request don't hash to the same slot
```

Or the client can’t connect to the cluster at all:

```
Error: connect ECONNREFUSED 127.0.0.1:6379
ClusterAllFailedError: Failed to refresh slots cache.
```

Or after a node failure, the client stops working even though the cluster elected a new primary.
Why This Happens
Redis Cluster distributes keys across 16,384 hash slots spread over multiple nodes. Unlike standalone Redis, you can’t just point a client at one node and run all commands:
- MOVED errors — the key belongs to a different node than the one you connected to. A cluster-unaware client (or one configured with a single node) can’t follow these redirects automatically.
- CROSSSLOT errors — multi-key commands (MSET, MGET, DEL with multiple keys, Lua scripts, transactions) require all keys to be in the same hash slot. If they’re on different nodes, the cluster refuses the operation.
- Client not cluster-aware — a regular Redis client pointed at one cluster node won’t handle slot routing. You need a client with cluster mode enabled.
- Cluster topology changed — after a failover or node addition, the client’s cached slot map is stale. The client needs to refresh its view of the cluster.
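To make the routing concrete: the slot for a key is `CRC16(key) mod 16384`, and if the key contains a non-empty `{...}` section, only the content of the first such section is hashed. A minimal sketch of that calculation (not any client library's actual code):

```javascript
// CRC16-CCITT (XModem): the checksum Redis Cluster uses for key slots
function crc16(bytes) {
  let crc = 0;
  for (const b of bytes) {
    crc ^= b << 8;
    for (let i = 0; i < 8; i++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// Slot = CRC16 of the key (or of its first non-empty {hash tag}) mod 16384
function keyHashSlot(key) {
  const start = key.indexOf('{');
  if (start !== -1) {
    const end = key.indexOf('}', start + 1);
    if (end !== -1 && end > start + 1) key = key.slice(start + 1, end);
  }
  return crc16(Buffer.from(key)) % 16384;
}

console.log(keyHashSlot('user:1000:profile'));   // some slot in 0..16383
console.log(keyHashSlot('{user:1000}:profile')); // same slot as every {user:1000}* key
```

Two keys route to the same node exactly when this function returns the same slot for both, which is why the hash-tag tricks in Fix 2 work.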
Fix 1: Use a Cluster-Aware Client
The most common root cause: using a non-cluster client against a cluster. Every Redis client library has a cluster mode:
```javascript
// Node.js — ioredis
import Redis from 'ioredis';

// WRONG — single-node client against a cluster
const client = new Redis({ host: '192.168.1.1', port: 6379 });

// CORRECT — cluster client with all known nodes as seeds
const cluster = new Redis.Cluster([
  { host: '192.168.1.1', port: 6379 },
  { host: '192.168.1.2', port: 6379 },
  { host: '192.168.1.3', port: 6379 },
], {
  redisOptions: {
    password: 'your-password',
  },
  // Retry on CLUSTERDOWN and during failover
  clusterRetryStrategy: (times) => Math.min(times * 100, 3000),
  // Timeout (ms) for each CLUSTER SLOTS refresh call
  slotsRefreshTimeout: 2000,
});

// Now all commands route to the correct node automatically
await cluster.set('key', 'value');
const val = await cluster.get('key');
```

```python
# Python — redis-py
from redis.cluster import RedisCluster, ClusterNode

# WRONG — regular Redis client
import redis
client = redis.Redis(host='192.168.1.1', port=6379)

# CORRECT — cluster client (startup_nodes takes ClusterNode objects)
cluster = RedisCluster(
    startup_nodes=[
        ClusterNode("192.168.1.1", 6379),
        ClusterNode("192.168.1.2", 6379),
    ],
    decode_responses=True,
    password="your-password",
)
cluster.set("key", "value")
val = cluster.get("key")
```

```go
// Go — go-redis
import "github.com/redis/go-redis/v9"

// CORRECT — cluster client
rdb := redis.NewClusterClient(&redis.ClusterOptions{
	Addrs: []string{
		"192.168.1.1:6379",
		"192.168.1.2:6379",
		"192.168.1.3:6379",
	},
	Password: "your-password",
})
```

Fix 2: Fix CROSSSLOT Errors with Hash Tags
CROSSSLOT happens when multi-key commands span different hash slots. Use hash tags — the part of the key inside {} determines the slot:
```javascript
// WRONG — these keys hash to different slots
await cluster.mset(
  'user:1000:profile', JSON.stringify(profile),
  'user:1000:settings', JSON.stringify(settings),
);
// ReplyError: CROSSSLOT Keys in request don't hash to the same slot

// CORRECT — hash tags force the same slot
// {user:1000} is the hash tag — all keys with this tag go to the same slot
await cluster.mset(
  '{user:1000}:profile', JSON.stringify(profile),
  '{user:1000}:settings', JSON.stringify(settings),
);

// Now MGET works too
const [profileData, settingsData] = await cluster.mget(
  '{user:1000}:profile',
  '{user:1000}:settings',
);
```

How hash tags work:

```
Key: "order:12345"    → hashes the entire key
Key: "{order}:12345"  → hashes only "order"
Key: "{order:12345}"  → hashes "order:12345"
Key: "{}order:12345"  → {} is empty, hashes the entire key
Key: "a{foo}b{bar}c"  → hashes only "foo" (first {} wins)
```

Pipeline with hash tags:
```javascript
// Pipelining multi-key operations within the same slot
const pipeline = cluster.pipeline();
pipeline.set('{session:abc123}:data', sessionData);
pipeline.set('{session:abc123}:expires', expiry);
pipeline.expire('{session:abc123}:data', 3600);
await pipeline.exec();
```

Transactions (MULTI/EXEC) in cluster mode:

```javascript
// Transactions require all keys in the same slot
// Use hash tags to ensure this
const multi = cluster.multi();
multi.get('{user:1000}:balance');
multi.decrby('{user:1000}:balance', 100);
multi.incrby('{user:1000}:escrow', 100);
await multi.exec();
```

Fix 3: Handle MOVED and ASK Redirects in Custom Code
If you’re not using a cluster-aware client, or you’re implementing low-level cluster support, handle redirect responses:
```javascript
// ioredis handles MOVED automatically, but if you need manual control:
cluster.on('node error', (err, address) => {
  console.error(`Redis node ${address} error:`, err);
});

// Force slot map refresh after topology changes
await cluster.refreshSlotsCache();

// Check cluster status
const info = await cluster.cluster('INFO');
console.log(info);
// cluster_state:ok
// cluster_slots_assigned:16384
// cluster_known_nodes:6
```

Detecting and handling MOVED manually (low-level):
```javascript
async function clusterGet(key) {
  try {
    return await client.get(key);
  } catch (err) {
    if (err.message.startsWith('MOVED')) {
      // Parse: "MOVED <slot> <host>:<port>"
      const [, slot, address] = err.message.split(' ');
      const [host, port] = address.split(':');
      // Connect to the correct node and retry
      const correctNode = new Redis({ host, port: parseInt(port, 10) });
      const value = await correctNode.get(key);
      correctNode.disconnect();
      return value;
    }
    throw err;
  }
}
```

Fix 4: Configure Connection Pooling and Failover
Production Redis Cluster setups need robust failover and retry configuration:
```javascript
// ioredis — production-ready cluster config
const cluster = new Redis.Cluster(
  [
    { host: 'redis-1', port: 6379 },
    { host: 'redis-2', port: 6379 },
    { host: 'redis-3', port: 6379 },
  ],
  {
    redisOptions: {
      password: process.env.REDIS_PASSWORD,
      tls: process.env.NODE_ENV === 'production' ? {} : undefined,
      connectTimeout: 10000,
      commandTimeout: 5000,
    },
    // Retry strategy during failover
    clusterRetryStrategy(times) {
      if (times > 10) return null; // Stop retrying after 10 attempts
      return Math.min(100 * Math.pow(2, times), 10000);
    },
    // Queue commands while disconnected instead of failing immediately
    enableOfflineQueue: true,
    // Refresh the slot map periodically
    slotsRefreshInterval: 5000, // Refresh every 5 seconds
    slotsRefreshTimeout: 2000,
    // Read from replicas for read-heavy workloads
    scaleReads: 'slave', // 'master' | 'slave' | 'all'
  }
);

cluster.on('connect', () => console.log('Cluster connected'));
cluster.on('error', (err) => console.error('Cluster error:', err));
cluster.on('+node', (node) => console.log('Node added:', node.options.host));
cluster.on('-node', (node) => console.log('Node removed:', node.options.host));
cluster.on('node error', (err, node) => {
  console.error('Node error on', node, ':', err);
});
```

BullMQ / job queues with Redis Cluster:
```javascript
// BullMQ requires an ioredis Cluster client
import { Queue, Worker } from 'bullmq';
import Redis from 'ioredis';

const connection = new Redis.Cluster([
  { host: 'redis-1', port: 6379 },
  { host: 'redis-2', port: 6379 },
]);

// On Redis Cluster, give BullMQ a prefix wrapped in a hash tag so all
// of a queue's keys land in the same slot
const queue = new Queue('emails', { connection, prefix: '{bullmq}' });
const worker = new Worker('emails', processEmail, { connection, prefix: '{bullmq}' });
```

Fix 5: Lua Scripts in Cluster Mode
Lua scripts work in cluster mode only if all keys accessed are in the same slot:
```javascript
// WRONG — Lua script accessing keys in different slots
const script = `
  local value = redis.call('GET', KEYS[1])
  redis.call('SET', KEYS[2], value)
  return value
`;
await cluster.eval(script, 2, 'key1', 'key2');
// CROSSSLOT error if key1 and key2 are in different slots

// CORRECT — use hash tags to co-locate keys
await cluster.eval(script, 2, '{user:100}:source', '{user:100}:dest');

// With ioredis defineCommand
cluster.defineCommand('copyValue', {
  numberOfKeys: 2,
  lua: `
    local value = redis.call('GET', KEYS[1])
    if value then
      redis.call('SET', KEYS[2], value)
    end
    return value
  `,
});

// Call — all keys must have the same hash tag
await cluster.copyValue('{session:abc}:old', '{session:abc}:new');
```

Fix 6: Testing with Redis Cluster Locally
Set up a local cluster for development:
```yaml
# Option 1: Docker Compose with redis-cluster image
# docker-compose.yml
services:
  redis-cluster:
    image: grokzen/redis-cluster:7.0.10
    environment:
      IP: 0.0.0.0
      INITIAL_PORT: 7000
      MASTERS: 3
      SLAVES_PER_MASTER: 1
    ports:
      - "7000-7005:7000-7005"
```

```shell
# Option 2: Create a cluster manually with redis-cli
# Start 6 Redis instances first, then:
redis-cli --cluster create \
  127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
  127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
  --cluster-replicas 1

# Verify the cluster
redis-cli -p 7000 cluster info
redis-cli -p 7000 cluster nodes
```

Option 3: Use Upstash or Redis Cloud for managed cluster testing (no local setup required).

Still Not Working?
CLUSTERDOWN — the cluster is degraded. Check how many masters are unreachable: Redis Cluster needs a majority of master nodes to agree before serving traffic. With 3 masters, losing 2 leaves no majority, so by default the cluster rejects commands entirely (cluster_state:fail); reads keep working only if cluster-allow-reads-when-down is enabled. Check redis-cli -p 7000 cluster info for cluster_state.
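Two redis.conf directives control how much availability the cluster sacrifices in this state (a sketch; the values shown are the stock defaults):

```
# redis.conf: cluster availability knobs

# "yes" (default): the cluster stops serving ALL slots as soon as any slot
# is uncovered. "no": keep serving the slots that still have a live master.
cluster-require-full-coverage yes

# "yes": nodes keep answering read commands while the cluster is marked
# down (Redis 6+). Writes are still rejected either way.
cluster-allow-reads-when-down no
```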
Keys not distributing evenly — if most keys use the same hash tag, they all land in one slot on one node. This defeats the purpose of clustering. Verify that your hash tag strategy spreads load. Use redis-cli --cluster check <host>:<port> to view slot distribution.
ASK errors — unlike MOVED, ASK is temporary and indicates a slot is being migrated. A cluster-aware client handles ASK transparently. If you’re seeing persistent ASK errors, a resharding operation may be stuck.
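Both redirect replies share the shape `<TYPE> <slot> <host>:<port>` and can be parsed uniformly when debugging. A small sketch (the helper name is mine, not part of any client library):

```javascript
// Parse a "MOVED <slot> <host>:<port>" or "ASK <slot> <host>:<port>"
// error message into parts; returns null for non-redirect errors.
function parseRedirect(message) {
  const match = /^(MOVED|ASK) (\d+) (.+):(\d+)$/.exec(message);
  if (!match) return null;
  return {
    type: match[1],          // 'MOVED' = permanent, 'ASK' = one-off during migration
    slot: Number(match[2]),
    host: match[3],
    port: Number(match[4]),
  };
}

console.log(parseRedirect('MOVED 7638 192.168.1.3:6379'));
// { type: 'MOVED', slot: 7638, host: '192.168.1.3', port: 6379 }
```

The follow-up differs by type: a MOVED retry should also trigger a slot-map refresh, while an ASK retry must be preceded by the ASKING command on the target node.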
Sentinel vs Cluster — Redis Sentinel provides high availability for a single-primary setup. Redis Cluster provides both HA and horizontal scaling. They’re different systems. If you just need failover without sharding, Sentinel may be simpler.
For related Redis issues, see Fix: Redis Connection Refused and Fix: Redis OOM Command Not Allowed.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.