Fix: Java OutOfMemoryError – Java Heap Space, Metaspace, and GC Overhead
Quick Answer
How to fix Java OutOfMemoryError including heap space, Metaspace, GC overhead limit exceeded, and unable to create new native thread.
The Error
Your Java application crashes with one of these errors:
java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.Arrays.copyOf(Arrays.java:3512)
at java.base/java.util.ArrayList.grow(ArrayList.java:237)
at java.base/java.util.ArrayList.addAll(ArrayList.java:590)
at com.example.service.DataProcessor.loadAll(DataProcessor.java:47)

Or the Metaspace variant:
java.lang.OutOfMemoryError: Metaspace
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)

Or the GC overhead error:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.base/java.util.HashMap.resize(HashMap.java:700)
at java.base/java.util.HashMap.putVal(HashMap.java:658)
at com.example.cache.InMemoryCache.put(InMemoryCache.java:93)

Or the native thread error:
java.lang.OutOfMemoryError: unable to create new native thread
at java.base/java.lang.Thread.start0(Native Method)
at java.base/java.lang.Thread.start(Thread.java:802)
at java.base/java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:945)

The JVM has exhausted one of its memory regions and cannot allocate more. The application either needs more memory, has a memory leak, or is configured with limits that are too low.
Why This Happens
The JVM divides memory into several regions, and each OutOfMemoryError variant points to a different region running out of space:
Heap space (-Xmx) is where all Java objects live. When you create objects faster than the garbage collector can reclaim them, or when live objects simply exceed the maximum heap size, the JVM throws OutOfMemoryError: Java heap space. The default maximum heap is typically 1/4 of the physical memory, or 256 MB on some JVM distributions, which is often too small for production workloads.
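To see which maximum heap the JVM actually picked on a given machine (the default, or whatever -Xmx you passed), you can query the runtime directly. A minimal sketch; the class name is illustrative:

```java
// Print the heap limits the running JVM is actually using.
// Run once with no flags and once with e.g. -Xmx2g to compare.
public class HeapLimits {
    /** Maximum heap in bytes, as reported by the runtime (-Xmx or the default). */
    static long maxHeapBytes() {
        return Runtime.getRuntime().maxMemory();
    }

    public static void main(String[] args) {
        long max = maxHeapBytes();
        long committed = Runtime.getRuntime().totalMemory(); // heap committed so far
        long free = Runtime.getRuntime().freeMemory();       // free within committed
        System.out.printf("max=%d MB committed=%d MB used=%d MB%n",
                max / (1024 * 1024), committed / (1024 * 1024),
                (committed - free) / (1024 * 1024));
    }
}
```

Running this inside your deployment environment is a quick sanity check that the flags you think you set are the flags the JVM is using.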
Metaspace replaced PermGen in Java 8. It stores class metadata: class definitions, method bytecode, constant pools, and annotations. Every time a class is loaded, Metaspace grows. By default, Metaspace has no hard upper limit (it grows until the OS runs out of memory), but if you set -XX:MaxMetaspaceSize too low, or if classloaders keep loading classes without releasing old ones, you get OutOfMemoryError: Metaspace.
GC overhead limit exceeded means the garbage collector is spending more than 98% of CPU time on garbage collection and recovering less than 2% of the heap on each cycle. The JVM throws this to prevent the application from making near-zero progress while burning CPU. This is almost always a symptom of the heap being too small for the live data set, or a memory leak filling the heap to capacity.
Unable to create new native thread is an OS-level limit, not a JVM heap issue. Each Java thread requires a native OS thread, which consumes stack memory (typically 512 KB to 1 MB per thread). If the process hits the OS thread limit (ulimit -u on Linux) or runs out of virtual address space for thread stacks, this error occurs.
Understanding which variant you are dealing with determines the fix. If your application is running in a container, the situation is more nuanced because Docker and Kubernetes impose their own memory limits on top of the JVM settings, which can cause the process to be killed before the JVM even throws an error. For container-related crashes, see Fix: Docker Container Exited with Code 137 (OOMKilled).
Fix 1: Increase Heap Space with -Xmx and -Xms
The most direct fix for OutOfMemoryError: Java heap space is to give the JVM more memory.
Set the maximum and initial heap size:
java -Xms512m -Xmx2g -jar myapp.jar

- -Xms512m sets the initial heap to 512 MB. The JVM allocates this much memory at startup.
- -Xmx2g sets the maximum heap to 2 GB. The heap can grow up to this limit before throwing OutOfMemoryError.
Setting -Xms equal to -Xmx is a common production practice. It avoids the overhead of heap resizing at runtime and makes memory behavior more predictable:
java -Xms4g -Xmx4g -jar myapp.jar

For Spring Boot applications, set these via JAVA_OPTS or JAVA_TOOL_OPTIONS:
export JAVA_TOOL_OPTIONS="-Xms1g -Xmx4g"
java -jar myapp.jar

JAVA_TOOL_OPTIONS is picked up automatically by the JVM without needing to modify launch scripts.
In Maven or Gradle:
# Maven
export MAVEN_OPTS="-Xmx2g"
mvn clean install
# Gradle
export GRADLE_OPTS="-Xmx2g"
gradle build

Build tools themselves can run out of memory during compilation of large projects.
Real-world scenario: A team’s CI builds started failing with OutOfMemoryError: Java heap space after upgrading a dependency that pulled in a much larger transitive dependency graph. The Maven compiler needed more memory to process the expanded classpath. Adding -Xmx2g to MAVEN_OPTS in the CI configuration fixed it without any code changes. This is separate from your application’s runtime heap.
How much heap to allocate depends on your workload. A good starting point is to monitor actual heap usage with -XX:+PrintGCDetails or -Xlog:gc* (Java 11+), then set -Xmx to roughly 1.5 to 2 times the peak live data set size. Overallocating wastes memory and can increase GC pause times, while underallocating causes frequent collections and eventual OOM.
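Besides GC logs, you can sample heap occupancy from inside the process with the standard java.lang.management API, which is handy for periodic logging when external tooling is hard to attach. A sketch with illustrative names:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sample current heap occupancy from inside the JVM.
public class HeapSampler {
    /** Fraction of the maximum heap currently in use, between 0.0 and 1.0. */
    static double heapUsedFraction() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        return (double) heap.getUsed() / heap.getMax();
    }

    public static void main(String[] args) {
        System.out.printf("heap used: %.1f%% of max%n", heapUsedFraction() * 100);
    }
}
```

Logging this fraction once a minute gives you the peak live set without any GC-log parsing, at the cost of less detail than -Xlog:gc*.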
Fix 2: Increase Metaspace Size
If you see OutOfMemoryError: Metaspace, the JVM has loaded more class metadata than the Metaspace limit allows.
Increase the Metaspace limit:
java -XX:MaxMetaspaceSize=512m -XX:MetaspaceSize=128m -jar myapp.jar

- -XX:MetaspaceSize=128m sets the threshold at which a full GC is triggered to unload classes. This is not the initial allocation.
- -XX:MaxMetaspaceSize=512m sets the hard upper limit. Once reached, the JVM throws OutOfMemoryError: Metaspace.
Common causes of Metaspace exhaustion:
Classloader leaks. Web applications redeployed without a server restart often leak classloaders. Each redeployment loads all classes again, but the old classloader (and all its classes) cannot be garbage collected because something still holds a reference to it. In Tomcat, this is the most common cause of OutOfMemoryError: Metaspace after several hot redeploys.
Dynamic proxy generation. Frameworks like Hibernate, Spring AOP, and CGLIB create proxy classes at runtime. Large applications with thousands of entities and aspects can generate thousands of proxy classes.
Scripting engines and expression languages. Groovy, JSP compilation, and other scripting engines compile source code into classes at runtime. If scripts are compiled repeatedly (e.g., per request), Metaspace fills up.
To diagnose classloader leaks, use -verbose:class or -Xlog:class+load=info (Java 11+) to log every class loading event:
java -Xlog:class+load=info -jar myapp.jar 2>&1 | grep -c "class,load"

(Each loaded class produces one log line tagged [class,load], so counting those lines counts class loads.) If the count keeps growing across redeployments, you have a classloader leak.
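The same check can be done in-process with the standard ClassLoadingMXBean: if getLoadedClassCount() climbs steadily across redeploys while getUnloadedClassCount() stays flat, classes are accumulating. A sketch with an illustrative class name:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

// Report class-loading counters from inside the JVM. A loaded count that only
// ever grows across redeploys points at a classloader leak.
public class ClassCountProbe {
    static long loadedClassCount() {
        return ManagementFactory.getClassLoadingMXBean().getLoadedClassCount();
    }

    public static void main(String[] args) {
        ClassLoadingMXBean bean = ManagementFactory.getClassLoadingMXBean();
        System.out.println("currently loaded:  " + bean.getLoadedClassCount());
        System.out.println("total ever loaded: " + bean.getTotalLoadedClassCount());
        System.out.println("unloaded:          " + bean.getUnloadedClassCount());
    }
}
```

Logging these three counters before and after each redeploy gives you a leak signal without parsing class-load logs.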
Fix 3: Fix GC Overhead Limit Exceeded
GC overhead limit exceeded means the garbage collector is working hard but barely freeing any memory. The application is stuck in a near-infinite loop of allocation and collection.
Short-term fix: increase the heap:
java -Xmx4g -jar myapp.jar

This buys time but does not fix the underlying problem if you have a memory leak.
Disable the GC overhead check (not recommended for production, but useful for debugging):
java -XX:-UseGCOverheadLimit -Xmx4g -jar myapp.jar

This lets the application continue running even when GC is consuming most of the CPU. It can help you capture a heap dump or identify the problem, but the application will be essentially unresponsive.
Switch to a different garbage collector. The G1 collector (default since Java 9) handles large heaps better than the older Parallel collector:
java -XX:+UseG1GC -Xmx4g -jar myapp.jar

For applications with very large heaps (8 GB+), consider ZGC or Shenandoah for low-latency collection:
# ZGC (Java 15+)
java -XX:+UseZGC -Xmx16g -jar myapp.jar
# Shenandoah (Java 12+, not available in all JDK distributions)
java -XX:+UseShenandoahGC -Xmx16g -jar myapp.jar

The real fix is to find and eliminate the memory leak causing the heap to fill up. See Fix 5 below.
Fix 4: Fix Unable to Create New Native Thread
This error is about OS-level thread limits, not JVM heap.
Check and increase the thread limit on Linux:
# Check current limits
ulimit -u # max user processes (includes threads)
ulimit -a # all limits
# Increase temporarily
ulimit -u 65535
# Increase permanently in /etc/security/limits.conf
# Add these lines:
# appuser soft nproc 65535
# appuser hard nproc 65535

Reduce the stack size per thread to fit more threads into the available memory:
java -Xss512k -jar myapp.jar

The default stack size is typically 1 MB. Reducing it to 512 KB or even 256 KB lets you create roughly twice as many threads in the same memory footprint. Be cautious: if your application uses deep recursion, a small stack causes StackOverflowError.
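To gauge how much -Xss headroom your code needs, you can measure the maximum recursion depth the current stack size allows. This is a rough probe with an illustrative class name; catching StackOverflowError is fine for a one-off measurement, never for production logic:

```java
// Rough probe of usable stack depth under the current -Xss setting.
// Run once with the default stack and once with e.g. -Xss256k to compare.
public class StackDepthProbe {
    private static int depth = 0;

    private static void recurse() {
        depth++;
        recurse(); // Java does not eliminate tail calls, so this consumes a frame per call
    }

    /** Returns the depth reached before the stack overflowed. */
    static int measureMaxDepth() {
        depth = 0;
        try {
            recurse();
        } catch (StackOverflowError expected) {
            // deliberate: we ran the stack out to find its limit
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println("max recursion depth: " + measureMaxDepth());
    }
}
```

The exact depth varies with JVM version and frame size, but comparing two runs tells you how much margin a smaller -Xss leaves for your deepest call chains.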
Check how many threads your application is actually creating:
# Count threads for a running JVM process
jcmd <pid> Thread.print | grep -c "^\""
# Or use jstack
jstack <pid> | grep -c "^\""

If you see thousands of threads, the root cause is usually a thread pool misconfiguration or unbounded thread creation. Fix the application code:
// BAD: creating a new thread per request
new Thread(() -> handleRequest(request)).start();
// GOOD: use a bounded thread pool
ExecutorService executor = Executors.newFixedThreadPool(50);
executor.submit(() -> handleRequest(request));
// BETTER: use a thread pool with a bounded queue and rejection policy
ThreadPoolExecutor executor = new ThreadPoolExecutor(
10, 50, 60L, TimeUnit.SECONDS,
new LinkedBlockingQueue<>(1000),
new ThreadPoolExecutor.CallerRunsPolicy()
);

This pattern of unbounded resource creation is a common source of runtime errors across languages. If your thread pool sizes come from environment variables, make sure that configuration is loaded before the pool is initialized.
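A quick way to see the bounded pool's backpressure in action: with CallerRunsPolicy, tasks that overflow the queue run on the submitting thread instead of being dropped, so every task still completes. The pool and queue sizes here are illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates CallerRunsPolicy backpressure: when the queue is full, the
// submitting thread runs the task itself, so nothing is silently dropped.
public class BoundedPoolDemo {
    static int runAll(int taskCount) {
        AtomicInteger completed = new AtomicInteger();
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(10),              // small queue to force overflow
                new ThreadPoolExecutor.CallerRunsPolicy()); // caller absorbs the overflow
        for (int i = 0; i < taskCount; i++) {
            executor.submit(() -> { completed.incrementAndGet(); });
        }
        executor.shutdown();
        try {
            executor.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println("completed: " + runAll(1000));
    }
}
```

CallerRunsPolicy also slows down the producer naturally, because the submitting thread is busy running the overflow task instead of enqueueing more work.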
Fix 5: Detect and Fix Memory Leaks
If increasing memory only delays the crash, you have a memory leak. Java has garbage collection, but objects that are still reachable (referenced) cannot be collected, even if your code no longer needs them.
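The key point is reachability: the collector only frees an object once nothing references it. A small demonstration using WeakReference; the prompt clearing after System.gc() is typical HotSpot behavior rather than a strict spec guarantee:

```java
import java.lang.ref.WeakReference;

// Demonstrates reachability: a weakly referenced object survives only as
// long as some strong reference to it exists.
public class ReachabilityDemo {
    static boolean clearedAfterStrongRefDropped() {
        Object leakCandidate = new Object();
        WeakReference<Object> ref = new WeakReference<>(leakCandidate);

        // While the strong reference is held, the object cannot be collected.
        if (ref.get() == null) return false;

        leakCandidate = null; // drop the only strong reference
        // Nudge the collector; on HotSpot the weak reference is cleared promptly.
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();
            try { Thread.sleep(10); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return ref.get() == null;
    }

    public static void main(String[] args) {
        System.out.println("collected after strong ref dropped: "
                + clearedAfterStrongRefDropped());
    }
}
```

A leak is exactly the first half of this demo with no second half: some cache, static field, or listener keeps playing the role of leakCandidate's strong reference forever.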
Step 1: Capture a heap dump.
Generate a heap dump when OOM occurs automatically:
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof -Xmx2g -jar myapp.jar

Or capture one manually from a running process:
# Using jmap
jmap -dump:live,format=b,file=/tmp/heapdump.hprof <pid>
# Using jcmd (preferred on modern JDKs)
jcmd <pid> GC.heap_dump /tmp/heapdump.hprof

Step 2: Analyze the heap dump.
Open the dump in Eclipse Memory Analyzer (MAT):
- Download MAT from eclipse.org/mat.
- Open the .hprof file.
- Run the Leak Suspects report. MAT identifies objects that dominate a large portion of the heap.
- Inspect the Dominator Tree to see which objects hold the most memory and trace their reference chains back to GC roots.
Alternatively, use jvisualvm (bundled with JDK 8, available as a standalone download for later versions) to connect to a running JVM and monitor heap usage in real time:
jvisualvm

In VisualVM, you can perform heap dumps, view object allocation, and run a sampler to identify which classes are consuming the most memory.
Step 3: Use GC logging to monitor allocation over time.
# Java 11+
java -Xlog:gc*:file=/var/log/app-gc.log:time,uptime,level,tags -Xmx2g -jar myapp.jar
# Java 8
java -verbose:gc -Xloggc:/var/log/app-gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xmx2g -jar myapp.jar

Upload the GC log to GCEasy or GCViewer for visualization. Look for a steadily rising baseline heap usage after each GC cycle, which indicates a leak.
Fix 6: Fix Common Memory Leak Patterns
Here are the most frequent causes of memory leaks in Java applications.
Static collections that grow without bounds:
// LEAK: static map grows forever
public class EventCache {
private static final Map<String, Event> cache = new HashMap<>();
public static void addEvent(String id, Event event) {
cache.put(id, event); // entries never removed
}
}

Fix by using a bounded cache, weak references, or an eviction policy:
// FIX 1: Use a bounded cache with LRU eviction
private static final Map<String, Event> cache = new LinkedHashMap<>(100, 0.75f, true) {
@Override
protected boolean removeEldestEntry(Map.Entry<String, Event> eldest) {
return size() > 10_000;
}
};
// FIX 2: Use WeakHashMap (entries are GC'd when keys are no longer referenced)
private static final Map<String, Event> cache = new WeakHashMap<>();
// FIX 3: Use Caffeine or Guava Cache with size/time limits
private static final Cache<String, Event> cache = Caffeine.newBuilder()
.maximumSize(10_000)
.expireAfterWrite(Duration.ofMinutes(30))
    .build();

Unclosed resources (streams, connections, result sets):
// LEAK: InputStream never closed if an exception occurs
public byte[] readFile(String path) throws IOException {
FileInputStream fis = new FileInputStream(path);
byte[] data = fis.readAllBytes();
fis.close(); // never reached if readAllBytes() throws
return data;
}

Fix with try-with-resources:
// FIX: try-with-resources guarantees close()
public byte[] readFile(String path) throws IOException {
try (FileInputStream fis = new FileInputStream(path)) {
return fis.readAllBytes();
}
}

This applies to database connections, HTTP connections, JDBC ResultSets, and any AutoCloseable resource. Connection pool exhaustion from unclosed connections can also trigger OutOfMemoryError indirectly.
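To convince yourself that close() runs even when the body throws, here is a tiny check with a custom AutoCloseable (the class names are illustrative):

```java
// Verifies that try-with-resources closes the resource even when the body throws.
public class CloseOnThrowDemo {
    static class TrackedResource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    /** Returns true if the resource was closed despite the exception. */
    static boolean closedDespiteException() {
        TrackedResource resource = new TrackedResource();
        try (TrackedResource r = resource) {
            throw new IllegalStateException("simulated read failure");
        } catch (IllegalStateException expected) {
            // the exception still propagates, but close() has already run
        }
        return resource.closed;
    }

    public static void main(String[] args) {
        System.out.println("closed despite exception: " + closedDespiteException());
    }
}
```

The compiler-generated finally block is what makes this pattern leak-proof: no code path, normal or exceptional, skips close().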
Classloader leaks in web applications:
When a web application is redeployed in Tomcat or Jetty, the old classloader should be garbage collected. But if any thread, static field, or JVM-level reference holds a reference to an object loaded by the old classloader, the entire classloader and all its loaded classes remain in Metaspace. Common culprits:
- ThreadLocal variables that store objects from the web application
- JDBC drivers registered with DriverManager (a JVM-level static registry)
- Logging frameworks that keep references to appenders loaded by the web application classloader
- Shutdown hooks registered on Runtime
// FIX: Deregister JDBC drivers on undeploy
@WebListener
public class AppContextListener implements ServletContextListener {
@Override
public void contextDestroyed(ServletContextEvent sce) {
Enumeration<Driver> drivers = DriverManager.getDrivers();
while (drivers.hasMoreElements()) {
Driver driver = drivers.nextElement();
if (driver.getClass().getClassLoader() == getClass().getClassLoader()) {
try {
DriverManager.deregisterDriver(driver);
} catch (SQLException e) {
// log the error
}
}
}
}
}

Listeners and callbacks that are never unregistered:
// LEAK: observer is never removed
public class Dashboard {
public Dashboard(EventBus bus) {
bus.register(this); // holds a reference to Dashboard forever
}
}Fix by unregistering when the object is no longer needed, or use weak references in the event bus implementation.
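One robust pattern is to pair every register with an unregister by making the subscriber AutoCloseable, so try-with-resources (or a framework lifecycle hook) guarantees removal. The EventBus below is a minimal stand-in for illustration, not a real library API:

```java
import java.util.HashSet;
import java.util.Set;

// Pairing register/unregister via AutoCloseable so subscribers cannot be
// forgotten. EventBus is a minimal illustrative stand-in.
public class ListenerLifecycleDemo {
    static class EventBus {
        final Set<Object> listeners = new HashSet<>();
        void register(Object listener)   { listeners.add(listener); }
        void unregister(Object listener) { listeners.remove(listener); }
    }

    static class Dashboard implements AutoCloseable {
        private final EventBus bus;
        Dashboard(EventBus bus) {
            this.bus = bus;
            bus.register(this);       // registration...
        }
        @Override public void close() {
            bus.unregister(this);     // ...always paired with removal
        }
    }

    static int listenersAfterUse(EventBus bus) {
        try (Dashboard dashboard = new Dashboard(bus)) {
            // use the dashboard; close() unregisters it on exit
        }
        return bus.listeners.size();  // 0 if the pairing worked
    }

    public static void main(String[] args) {
        System.out.println("listeners left: " + listenersAfterUse(new EventBus()));
    }
}
```

Without the close() call, the bus's listener set would hold the Dashboard (and everything it references) for the lifetime of the bus, which is exactly the leak described above.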
Fix 7: Configure Memory Limits in Docker Containers
When running Java in Docker, the container has a memory limit set by Docker or Kubernetes. If the JVM heap exceeds the container limit, the Linux OOM killer terminates the process (exit code 137) without the JVM ever getting a chance to throw OutOfMemoryError.
Set container memory and JVM heap together:
# docker-compose.yml
services:
app:
image: myapp:latest
deploy:
resources:
limits:
memory: 2g
environment:
      JAVA_OPTS: "-Xms1g -Xmx1536m"

The JVM heap (-Xmx) should be lower than the container limit because the JVM uses memory beyond the heap (Metaspace, thread stacks, native memory, direct byte buffers, code cache). A common rule is to set -Xmx to 70-80% of the container memory limit.
Use container-aware JVM settings (Java 10+):
java -XX:MaxRAMPercentage=75.0 -jar myapp.jar

This tells the JVM to use 75% of the detected container memory as the maximum heap. The JVM automatically detects Docker memory limits starting from Java 10 (backported to Java 8u191+).
# Verify the JVM sees the container limits correctly
docker run --memory=2g myapp:latest java -XX:+PrintFlagsFinal -version 2>&1 | grep MaxHeapSize

If you are encountering issues getting your container to start at all, see Fix: docker compose up errors for troubleshooting the container startup process itself.
In Kubernetes, set both requests and limits:
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: app
image: myapp:latest
resources:
requests:
memory: "1Gi"
limits:
memory: "2Gi"
env:
- name: JAVA_OPTS
          value: "-XX:MaxRAMPercentage=75.0"

Fix 8: Tune Garbage Collection for Large Heaps
For heaps larger than 4 GB, the default G1 collector may cause long pause times. Tuning GC can reduce memory pressure and delay or prevent OOM errors.
G1GC tuning for large heaps:
java -XX:+UseG1GC \
-XX:MaxGCPauseMillis=200 \
-XX:G1HeapRegionSize=16m \
-XX:InitiatingHeapOccupancyPercent=35 \
-Xmx8g \
  -jar myapp.jar

- -XX:MaxGCPauseMillis=200 tells G1 to aim for 200 ms pauses (it is a target, not a guarantee).
- -XX:G1HeapRegionSize=16m sets the region size. Larger regions are more efficient for large heaps.
- -XX:InitiatingHeapOccupancyPercent=35 starts concurrent marking earlier (default is 45). This gives the GC more time to reclaim memory before the heap fills up.
Enable GC logging in production so you can diagnose issues after the fact:
java -Xlog:gc*:file=/var/log/app/gc.log:time,uptime,level,tags:filecount=10,filesize=50m \
  -Xmx8g -jar myapp.jar

This writes GC logs to rotating files (10 files, 50 MB each) so they do not consume unlimited disk space.
Still Not Working?
Identify which memory region is exhausted
Use jcmd to get a full breakdown of JVM memory usage:
jcmd <pid> VM.native_memory summary

This requires starting the JVM with -XX:NativeMemoryTracking=summary:
java -XX:NativeMemoryTracking=summary -Xmx2g -jar myapp.jar

The output shows heap, Metaspace, thread stacks, code cache, GC overhead, and internal memory. This tells you exactly which region is consuming more than expected.
Pro Tip: If your heap dump is too large to open in Eclipse MAT, run jmap -histo <pid> | head -30 on the live process instead. This shows the top classes by instance count and byte size without generating a full dump, and it is often enough to identify the leak.
Off-heap memory leaks
If the JVM process uses far more memory than -Xmx would suggest, the leak may be in native memory. Common causes:
- Direct ByteBuffers (ByteBuffer.allocateDirect()) allocated outside the heap. Limit them with -XX:MaxDirectMemorySize=256m.
- JNI code or native libraries that allocate memory without going through the JVM.
- Memory-mapped files that hold large regions mapped into the process address space.
Use jcmd <pid> VM.native_memory detail for a fine-grained breakdown.
Application profiling
If heap dumps are not conclusive, use a profiler to track allocations over time:
- Java Flight Recorder (JFR): Built into the JDK since Java 11 (backported to Java 8u262+). Start a recording:
jcmd <pid> JFR.start duration=60s filename=/tmp/recording.jfr

Open the recording in JDK Mission Control to see allocation hot spots, memory usage trends, and GC behavior.
- async-profiler: A low-overhead sampling profiler that can track allocations:
./profiler.sh -e alloc -d 30 -f /tmp/alloc-flamegraph.html <pid>

This generates a flame graph showing which code paths are allocating the most memory.
Check for known framework issues
Some frameworks have known patterns that trigger OOM:
- Hibernate: Loading large result sets without pagination (query.list() on millions of rows) pulls everything into memory. Use setMaxResults() and setFirstResult(), or scroll with ScrollableResults.
- Jackson/JSON parsing: Parsing very large JSON documents into a DOM tree (ObjectMapper.readTree()) requires the entire document in memory. Use streaming with JsonParser for large inputs.
- Apache POI: Reading large Excel files (.xlsx) with XSSFWorkbook loads the entire file into memory. Use SXSSFWorkbook for writing or the SAX-based event model for reading.
If your Java application loads dependencies dynamically and the class itself is not found, that is a different error entirely. See Fix: java.lang.ClassNotFoundException for resolving classpath issues. If you are troubleshooting a Node.js service alongside your Java backend, missing modules follow a similar diagnostic pattern described in Fix: Cannot find module.
Solo developer based in Japan. Every solution is cross-referenced with official documentation and tested before publishing.
Related Articles
Fix: Java StackOverflowError – Infinite Recursion, Circular References, and Stack Size
How to fix java.lang.StackOverflowError caused by infinite recursion, circular references in toString or equals, JPA bidirectional relationships, and Spring circular dependencies.
Fix: Maven Could Not Resolve Dependencies - Failed to Read Artifact Descriptor
How to fix Maven's 'Could not resolve dependencies' and 'Failed to read artifact descriptor' errors caused by corrupted cache, proxy settings, missing repositories, and version conflicts.
Fix: Gradle Build Failed – Could Not Resolve Dependencies or Compilation Error
How to fix Gradle build failures including dependency resolution errors, compilation failures, incompatible Java versions, and daemon issues.
Fix: java.lang.ClassNotFoundException (Class Not Found at Runtime)
How to fix Java ClassNotFoundException at runtime by resolving missing dependencies, classpath issues, Maven/Gradle configuration, JDBC drivers, classloader problems, and Java module system errors.