Java – When the JVM runs garbage collection, Kubernetes pod memory usage does not drop

When the JVM runs garbage collection, Kubernetes pod memory usage does not drop… here is a solution to the problem.

When the JVM runs garbage collection, Kubernetes pod memory usage does not drop

It’s hard for me to understand why my Java application slowly consumes all the memory available to the pods, causing Kubernetes to mark the pods as out of memory. The JVM (OpenJDK 8) starts with the following parameters:

-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2
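
As a quick sanity check, a minimal sketch like the following (the class name HeapCheck is illustrative, not from the original question) can be run inside the container with the same flags to see which heap limit the JVM actually derived from the cgroup memory limit:

// HeapCheck.java - illustrative; run inside the container with the same
// -XX flags to confirm the heap limit derived from the cgroup.
public class HeapCheck {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();     // effective -Xmx
        long committed = Runtime.getRuntime().totalMemory(); // currently committed heap
        System.out.printf("Max heap:       %d MiB%n", maxHeap / (1024 * 1024));
        System.out.printf("Committed heap: %d MiB%n", committed / (1024 * 1024));
        // With a 1 GiB pod limit and -XX:MaxRAMFraction=2, the max heap
        // should come out at roughly 512 MiB.
    }
}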

I’m monitoring the memory used by the pods as well as JVM memory and expected to see some correlation, for example pod memory going down after a major garbage collection. But I don’t see this. I’ve attached some charts below:

Pod memory: [chart]
Total JVM memory: [chart]
Detailed breakdown of JVM memory (sorry, all the colors look the same… thanks, Kibana): [chart]

What I’m struggling to understand is why pod memory did not drop when the heap memory decreased significantly just before 16:00.

Solution

It looks like you’re creating the pod with a resource limit of 1 GB of memory.
You are setting -XX:MaxRAMFraction=2, which means the JVM sizes its maximum heap at 50% of the available memory, i.e. about 512 MB. That matches the Memory Limit line in your charts.

The JVM then keeps roughly 80% of that maximum committed, which is the level you see plotted as Memory Consumed (about 80% of 512 MB, so roughly 410 MB).

When you look at Memory Consumed, you don’t see the effect of garbage collection (which is visible in the second chart) because memory freed by GC is returned to the JVM’s heap but remains reserved by the JVM process rather than being handed back to the operating system, so the pod-level figure does not go down.
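
A small sketch (the class name GcDemo is illustrative) makes that distinction visible: the “used” figure drops after a forced GC, while the “committed” figure, which is roughly what the pod-level metric tracks, typically stays where it was (the exact behaviour depends on the collector and heap-shrinking settings):

// GcDemo.java - illustrative; shows the difference between "used" heap,
// which drops after GC, and "committed" heap, which the JVM keeps
// reserved, so the pod's memory usage does not shrink accordingly.
public class GcDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        byte[] big = new byte[200 * 1024 * 1024];  // ~200 MiB allocation
        print("after allocation", rt);
        big = null;                                 // make the array collectable
        System.gc();                                // request a full GC
        print("after GC", rt);
    }

    static void print(String label, Runtime rt) {
        long usedMiB = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long committedMiB = rt.totalMemory() / (1024 * 1024);
        System.out.printf("%-16s used=%d MiB, committed=%d MiB%n",
                label, usedMiB, committedMiB);
    }
}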

Is there a memory leak in your Java application? Over time it would cause more and more heap to stay reserved, until the JVM limit (512 MB) is reached and your pod is terminated by the OOM killer.
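
For reference, the classic leak pattern is an unbounded, long-lived collection, for example a static cache that only ever grows; a heap dump (e.g. taken with jmap) would show such a structure dominating retained memory. A contrived sketch of that pattern (all names are illustrative, not from the original question):

import java.util.HashMap;
import java.util.Map;

// LeakyCache.java - contrived example of the kind of pattern that would
// produce a slow, steady climb in heap usage: entries are added on every
// lookup and never evicted, so live heap grows until the limit is hit
// and the pod is OOM-killed.
public class LeakyCache {
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static byte[] lookup(String key) {
        // Nothing ever removes entries, so the map only grows.
        return CACHE.computeIfAbsent(key, k -> new byte[64 * 1024]);
    }
}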
