Fix: Go Concurrent Map Read and Write Panic — fatal error: concurrent map
Quick Answer
How to fix Go's concurrent map read and write panic — using sync.RWMutex, sync.Map, channel-based serialization, and structuring code to avoid shared state.
The Error
A Go program panics with a concurrent map access error:
fatal error: concurrent map read and map write
goroutine 7 [running]:
runtime.fatal({0x5e4c5e?, 0x0?})
/usr/local/go/src/runtime/panic.go:1023 +0x57 fp=0xc000051f38 sp=0xc000051f08 pc=0x43cee7
runtime.mapaccess1_faststr(...)
/usr/local/go/src/runtime/map_faststr.go:31 +0x2a5
goroutine 1 [runnable]:
main.main()
/tmp/sandbox/main.go:18 +0x88

Or the less common but equally fatal:
fatal error: concurrent map writes

Or the race detector catches it before it panics:
go run -race main.go
# WARNING: DATA RACE
# Write at 0x00c00001e390 by goroutine 7:
# runtime.mapassign_faststr(...)
# Previous read at 0x00c00001e390 by goroutine 1:
# main.main()

Why This Happens
Go’s built-in map type is not safe for concurrent use. Reading and writing (or writing and writing) a map from multiple goroutines simultaneously crashes the program with a fatal error — not a silent data race that corrupts memory, and not an ordinary panic that recover() can catch.
This design is intentional: Go’s runtime detects concurrent map access and aborts rather than allowing silent data corruption. The detection is best-effort — it is not guaranteed to catch every race — but when it does, it crashes fast.
Common scenarios that trigger this:
- HTTP handler goroutines sharing a map — each request spawns a goroutine; if they all write to a shared map, concurrent writes are inevitable under load.
- Background goroutine updating a cache map — a cache goroutine writes while request handlers read.
- go func() in a loop sharing the outer map — loop bodies start goroutines that reference the enclosing scope’s map.
- Sync mechanisms applied incorrectly — locking before reading but not before writing, or using the wrong lock.
Fix 1: Protect with sync.RWMutex
sync.RWMutex allows multiple concurrent readers OR one exclusive writer — the standard solution for read-heavy maps:
// WRONG — bare map accessed from multiple goroutines
var cache = make(map[string]string)
func setCache(key, value string) {
cache[key] = value // Concurrent write — race condition
}
func getCache(key string) string {
return cache[key] // Concurrent read — also causes panic
}

// CORRECT — protected with RWMutex
import "sync"
type SafeCache struct {
mu sync.RWMutex
items map[string]string
}
func NewSafeCache() *SafeCache {
return &SafeCache{items: make(map[string]string)}
}
func (c *SafeCache) Set(key, value string) {
c.mu.Lock() // Exclusive lock for write
defer c.mu.Unlock()
c.items[key] = value
}
func (c *SafeCache) Get(key string) (string, bool) {
c.mu.RLock() // Shared lock for read — allows concurrent readers
defer c.mu.RUnlock()
val, ok := c.items[key]
return val, ok
}
func (c *SafeCache) Delete(key string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.items, key)
}

Common Mistake: Using sync.Mutex (not RWMutex) for a read-heavy cache. sync.Mutex is exclusive for all operations — readers block other readers. sync.RWMutex.RLock() lets multiple goroutines read simultaneously, only blocking when a write occurs.
Embedding the mutex in the struct (standard pattern):
type RequestCounter struct {
sync.RWMutex // Embedded — use as c.Lock(), c.RLock(), etc.
counts map[string]int
}
func (c *RequestCounter) Increment(route string) {
c.Lock()
defer c.Unlock()
c.counts[route]++
}
func (c *RequestCounter) Get(route string) int {
c.RLock()
defer c.RUnlock()
return c.counts[route]
}

Fix 2: Use sync.Map for Concurrent Access
sync.Map is a built-in concurrent map optimized for specific access patterns — when keys are written once and read many times (like a read-heavy cache):
import "sync"
var cache sync.Map // Zero value is usable — no initialization needed
// Store — concurrent-safe write
cache.Store("user:42", userObject)
// Load — concurrent-safe read
val, ok := cache.Load("user:42")
if ok {
user := val.(User) // Type assertion — sync.Map stores interface{}
fmt.Println(user.Name)
}
// LoadOrStore — atomic get-or-set
actual, loaded := cache.LoadOrStore("user:42", newUser)
// loaded = true if the key already existed
// actual = the existing value (if loaded) or newUser (if stored)
// Delete
cache.Delete("user:42")
// Range — iterate (snapshot is not taken, may miss concurrent writes)
cache.Range(func(key, value any) bool {
fmt.Printf("%v: %v\n", key, value)
return true // Return false to stop iteration
})

When to use sync.Map vs sync.RWMutex + map:
| Use case | Use |
|---|---|
| Write once, read many (cache) | sync.Map |
| Keys known at startup, only reads concurrent | sync.Map |
| Frequent writes + reads | sync.RWMutex + map (better performance) |
| Need to iterate atomically | sync.RWMutex + map |
| Need complex operations (check-then-set) | sync.RWMutex + map |
sync.Map avoids contention by keeping two internal structures: a lock-free read-only map for stable keys and a mutex-protected “dirty” map for recent writes. For write-heavy workloads, where the dirty map must constantly be rebuilt, a plain map with sync.RWMutex is often faster.
Fix 3: Use Channels to Serialize Map Access
Instead of protecting a map with a mutex, serialize all access through a single goroutine using channels — the “share memory by communicating” approach:
type cacheRequest struct {
key string
value string // Non-empty for Set operations
response chan string
isGet bool
}
type MapActor struct {
data map[string]string
reqs chan cacheRequest
}
func NewMapActor() *MapActor {
a := &MapActor{
data: make(map[string]string),
reqs: make(chan cacheRequest, 100), // Buffered channel
}
go a.run() // Single goroutine owns the map
return a
}
func (a *MapActor) run() {
for req := range a.reqs {
if req.isGet {
req.response <- a.data[req.key]
} else {
a.data[req.key] = req.value
}
}
}
func (a *MapActor) Set(key, value string) {
a.reqs <- cacheRequest{key: key, value: value}
}
func (a *MapActor) Get(key string) string {
ch := make(chan string, 1)
a.reqs <- cacheRequest{key: key, isGet: true, response: ch}
return <-ch
}

This pattern eliminates all locking — the map is only ever accessed by the single run() goroutine. Callers communicate via channels.
Simpler for write-only patterns:
// Write-only channel actor — log aggregation, metrics, etc.
type MetricsCollector struct {
events chan string
counts map[string]int
}
func NewMetricsCollector() *MetricsCollector {
mc := &MetricsCollector{
events: make(chan string, 1000),
counts: make(map[string]int),
}
go mc.aggregate()
return mc
}
func (mc *MetricsCollector) aggregate() {
for event := range mc.events {
mc.counts[event]++ // Only this goroutine writes — no lock needed
}
}
func (mc *MetricsCollector) Record(event string) {
mc.events <- event // Blocks only if the 1000-slot buffer is full
}

Fix 4: Detect Races with the Race Detector
The Go race detector catches concurrent map accesses (and other data races) before they cause panics in production:
# Run tests with race detector
go test -race ./...
# Run application with race detector
go run -race main.go
# Build a race-detecting binary (for staging/testing)
go build -race -o myapp-race .
./myapp-race

Make race detection part of CI:
# .github/workflows/test.yml
- name: Run tests with race detector
  run: go test -race -timeout 60s ./...

The race detector uses ~5–10x more CPU and memory, so don’t run it in production. Run it in tests and staging.
Race detector output:
WARNING: DATA RACE
Write at 0x00c000126050 by goroutine 8:
main.writeToCache()
/tmp/main.go:15 +0x5c
Previous read at 0x00c000126050 by goroutine 6:
main.readFromCache()
/tmp/main.go:22 +0x44
Goroutine 8 (running) created at:
main.main()
/tmp/main.go:30 +0x104
Goroutine 6 (running) created at:
main.main()
/tmp/main.go:28 +0xcc

The output shows the exact file/line of the conflicting accesses and where the goroutines were created.
Fix 5: Avoid Shared State with Per-Goroutine Maps
The cleanest solution is to avoid sharing maps between goroutines entirely. If each goroutine has its own map, no synchronization is needed:
// WRONG — sharing a map across goroutines
func processRequests(requests []Request) {
results := make(map[string]Result) // Shared map
var wg sync.WaitGroup
for _, req := range requests {
wg.Add(1)
go func(req Request) {
defer wg.Done()
result := processRequest(req)
results[req.ID] = result // Concurrent write — race condition
}(req)
}
wg.Wait()
}
// CORRECT — each goroutine has its own result, collected afterward
func processRequests(requests []Request) map[string]Result {
type indexedResult struct {
id string
result Result
}
resultsCh := make(chan indexedResult, len(requests))
var wg sync.WaitGroup
for _, req := range requests {
wg.Add(1)
go func(req Request) {
defer wg.Done()
result := processRequest(req)
resultsCh <- indexedResult{id: req.ID, result: result}
}(req)
}
// Close channel when all goroutines are done
go func() {
wg.Wait()
close(resultsCh)
}()
// Collect results in a single goroutine — no shared state
results := make(map[string]Result, len(requests))
for r := range resultsCh {
results[r.id] = r.result // Only this goroutine writes to results
}
return results
}

Fix 6: Shard Large Maps to Reduce Contention
For very high-throughput scenarios, a single mutex around a large map becomes a bottleneck. Sharding distributes the lock contention across multiple smaller maps:
import "hash/fnv"

const shardCount = 32
type ShardedMap struct {
shards [shardCount]struct {
sync.RWMutex
m map[string]any
}
}
func NewShardedMap() *ShardedMap {
sm := &ShardedMap{}
for i := range sm.shards {
sm.shards[i].m = make(map[string]any)
}
return sm
}
func (sm *ShardedMap) shard(key string) int {
// Simple hash — distribute keys across shards
h := fnv.New32a()
h.Write([]byte(key))
return int(h.Sum32()) % shardCount
}
func (sm *ShardedMap) Set(key string, value any) {
s := sm.shard(key)
sm.shards[s].Lock()
defer sm.shards[s].Unlock()
sm.shards[s].m[key] = value
}
func (sm *ShardedMap) Get(key string) (any, bool) {
s := sm.shard(key)
sm.shards[s].RLock()
defer sm.shards[s].RUnlock()
v, ok := sm.shards[s].m[key]
return v, ok
}

With 32 shards, lock contention is reduced by ~32x for uniformly distributed keys.
Still Not Working?
Panic occurs during map iteration — ranging over a map while another goroutine modifies it also causes a panic. Lock the entire iteration:
func (c *SafeCache) Keys() []string {
c.mu.RLock()
defer c.mu.RUnlock()
keys := make([]string, 0, len(c.items))
for k := range c.items { // Lock held for the entire range
keys = append(keys, k)
}
return keys
}

Race on map inside a struct — even if the struct access is protected, direct access to the internal map from a goroutine that has a reference to the struct bypasses the lock:
// DANGEROUS — caller gets a reference to the internal map
func (c *SafeCache) RawMap() map[string]string {
return c.items // Caller can now access the map without the lock
}
// SAFE — return a copy
func (c *SafeCache) Snapshot() map[string]string {
c.mu.RLock()
defer c.mu.RUnlock()
snapshot := make(map[string]string, len(c.items)) // don't shadow the copy builtin
for k, v := range c.items {
snapshot[k] = v
}
return snapshot
}

For related Go concurrency issues, see Fix: Go Goroutine Leak and Fix: Go Context Deadline Exceeded.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: Go Deadlock — all goroutines are asleep, deadlock!
How to fix Go channel deadlocks — unbuffered vs buffered channels, missing goroutines, select statements, closing channels, sync primitives, and detecting deadlocks with go race detector.
Fix: Go Test Not Working — Tests Not Running, Failing Unexpectedly, or Coverage Not Collected
How to fix Go testing issues — test function naming, table-driven tests, t.Run subtests, httptest, testify assertions, and common go test flag errors.
Fix: Go Generics Type Constraint Error — Does Not Implement or Cannot Use as Type
How to fix Go generics errors — type constraints, interface vs constraint, comparable, union types, type inference failures, and common generic function pitfalls.
Fix: Go Error Handling Not Working — errors.Is, errors.As, and Wrapping
How to fix Go error handling — errors.Is vs ==, errors.As for type extraction, fmt.Errorf %w for wrapping, sentinel errors, custom error types, and stack traces.