




How To Fix Kafka Error - "Memory Allocation Error"



In this post, we will see how to fix the Kafka error "failed; error='Cannot allocate memory'", i.e. a memory allocation error. At times, when we restart a Kafka application, it throws an error that looks somewhat like the one below on the screen -


failed; error='Cannot allocate memory'

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::**commit_memory**(0x00007f4fe5cc0000, **65536**, 1) failed; error='Cannot allocate memory' (errno=12)
[thread 139980200949504 also had an error]

First things first, note the highlighted info in the error message - it holds the clue to the issue. It is a memory exception which typically happens for two reasons -

  • The Kafka topics have a very high number of partitions
  • The messages are kept in the system for a very long duration, i.e. the data retention period is on the higher side
Internally, Kafka memory-maps files for every topic and partition it hosts, and the operating system caps the number of memory map areas a process can use via the kernel parameter vm.max_map_count. Due to the two reasons above, the broker's map count overshoots that threshold, which generates the memory allocation error. Some additional information from the official Kafka documentation, presented in a more comprehensive way -

  • The default value of vm.max_map_count is somewhere around 65535.
  • Each log (partition) in the Kafka system requires 2 map areas, assuming it hosts a single log segment. So if you have 10K partitions, you would need 2 * 10K = 20K map areas. Hence, if you happen to have a very high number of partitions retained for a very long period, it is highly probable that you will trigger an OutOfMemoryError failure - a quick way to estimate your requirement is shown below.
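
Since the map areas largely come from the memory-mapped index files of each log segment, you can roughly estimate how many a broker needs by counting those files. A minimal sketch, assuming the log directory is /var/lib/kafka/data (substitute your own log.dirs path):

find /var/lib/kafka/data \( -name '*.index' -o -name '*.timeindex' \) | wc -l
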
To fix this,

  • Keep an eye on the map area count in your monitoring
  • Check the current limit configured for vm.max_map_count

sysctl vm.max_map_count

That prints the configured limit. To count the map areas the Kafka process is currently using, run


cat /proc/[kafka-pid]/maps | wc -l
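
To see at a glance how close the broker is to the limit, you can print both numbers together. A minimal sketch, assuming the broker JVM runs the usual kafka.Kafka main class (adjust the pgrep pattern to your setup):

echo "limit : $(sysctl -n vm.max_map_count)"
echo "in use: $(cat /proc/$(pgrep -f kafka.Kafka)/maps | wc -l)"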

  • Try increasing the limit by editing the /etc/sysctl.conf file and setting the modified value

vm.max_map_count=<MODIFIED_VALUE>

and then run the command below to apply the change.


sysctl -p
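
For example, the whole persistent change can be done in two commands - 262144 below is only an illustrative value, size it according to your partition and segment counts:

echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p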

Alternatively, to change the value at runtime without editing the file, you could use -


sysctl -w vm.max_map_count=<MODIFIED_VALUE>
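
Keep in mind that sysctl -w only changes the running kernel; the value still needs to be in /etc/sysctl.conf to survive a reboot. Since the setting is per host, you can push the runtime change to all brokers at once over SSH. A minimal sketch, assuming passwordless SSH and sudo, with hypothetical hostnames broker1 through broker3:

for host in broker1 broker2 broker3; do
  ssh "$host" sudo sysctl -w vm.max_map_count=262144   # illustrative value, same as above
done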

Note: This is an OS-level limit, hence you would need to perform this step on each of the Kafka brokers. Hope this helps you to fix the Kafka restart error.
